Test Report: Docker_Linux_crio_arm64 22021

714686ca7bbd77e34d847e892f53d4af2ede556f:2025-12-02:42609
Failed tests (63/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.74
44 TestAddons/parallel/Registry 16.59
45 TestAddons/parallel/RegistryCreds 0.5
46 TestAddons/parallel/Ingress 145.03
47 TestAddons/parallel/InspektorGadget 6.27
48 TestAddons/parallel/MetricsServer 5.39
50 TestAddons/parallel/CSI 40.89
51 TestAddons/parallel/Headlamp 3.38
52 TestAddons/parallel/CloudSpanner 5.25
53 TestAddons/parallel/LocalPath 8.41
54 TestAddons/parallel/NvidiaDevicePlugin 6.27
55 TestAddons/parallel/Yakd 5.26
106 TestFunctional/parallel/ServiceCmdConnect 603.57
134 TestFunctional/parallel/ServiceCmd/DeployApp 600.85
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
144 TestFunctional/parallel/ServiceCmd/Format 0.47
145 TestFunctional/parallel/ServiceCmd/URL 0.48
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 506.48
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.03
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.27
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.36
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.37
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.38
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.74
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.15
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.31
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.61
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.39
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.09
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 103.08
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.28
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.25
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.25
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.54
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 0.89
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.86
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.3
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.2
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.37
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 528.76
276 TestMultiControlPlane/serial/DeleteSecondaryNode 9.03
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.06
279 TestMultiControlPlane/serial/RestartCluster 374.39
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.5
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.01
293 TestJSONOutput/pause/Command 2.49
299 TestJSONOutput/unpause/Command 1.89
358 TestKubernetesUpgrade 794.48
384 TestPause/serial/Pause 6.23
432 TestStartStop/group/newest-cni/serial/FirstStart 7200.37
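
Note: to re-run any single failure from this matrix locally, a minimal sketch along the following lines should work with minikube's integration suite; the timeout value and the --minikube-start-args flag are assumptions about the suite's invocation, not taken from this report.

    # Hypothetical local re-run of one failing test from the table above
    # (the flag values below are assumptions, not from this report):
    go test -v -timeout 30m -run 'TestAddons/parallel/Registry' ./test/integration \
      -args --minikube-start-args="--driver=docker --container-runtime=crio"
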
TestAddons/serial/Volcano (0.74s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable volcano --alsologtostderr -v=1: exit status 11 (744.300829ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 18:51:56.098284   11378 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:51:56.099046   11378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:51:56.099062   11378 out.go:374] Setting ErrFile to fd 2...
	I1202 18:51:56.099068   11378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:51:56.099390   11378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:51:56.099754   11378 mustload.go:66] Loading cluster: addons-391119
	I1202 18:51:56.100186   11378 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:51:56.100209   11378 addons.go:622] checking whether the cluster is paused
	I1202 18:51:56.100382   11378 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:51:56.100402   11378 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:51:56.100964   11378 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:51:56.141194   11378 ssh_runner.go:195] Run: systemctl --version
	I1202 18:51:56.141251   11378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:51:56.160500   11378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:51:56.264668   11378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:51:56.264765   11378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:51:56.295228   11378 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:51:56.295249   11378 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:51:56.295259   11378 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:51:56.295263   11378 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:51:56.295266   11378 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:51:56.295270   11378 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:51:56.295273   11378 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:51:56.295276   11378 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:51:56.295279   11378 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:51:56.295285   11378 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:51:56.295288   11378 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:51:56.295292   11378 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:51:56.295295   11378 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:51:56.295297   11378 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:51:56.295301   11378 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:51:56.295306   11378 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:51:56.295314   11378 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:51:56.295318   11378 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:51:56.295321   11378 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:51:56.295324   11378 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:51:56.295328   11378 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:51:56.295332   11378 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:51:56.295335   11378 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:51:56.295338   11378 cri.go:89] found id: ""
	I1202 18:51:56.295400   11378 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:51:56.325973   11378 out.go:203] 
	W1202 18:51:56.329095   11378 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:51:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:51:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:51:56.329117   11378 out.go:285] * 
	* 
	W1202 18:51:56.751841   11378 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:51:56.754860   11378 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.74s)
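
Note: this failure, and most of the addons failures below, exit with MK_ADDON_DISABLE_PAUSED because the paused-state check runs "sudo runc list -f json" and /run/runc does not exist on the node. A minimal sketch for confirming the runtime state directly on the node follows; the profile name comes from this report, while /run/crun is an assumed alternative state directory for the CRI-O runtime and may not apply.

    # Check whether the runc (or assumed crun) state directory exists on the node:
    minikube -p addons-391119 ssh -- ls -ld /run/runc /run/crun
    # CRI-O's own view of running containers, which does not go through /run/runc:
    minikube -p addons-391119 ssh -- sudo crictl ps --state Running --quiet
    # The failing call from the log above, for comparison:
    minikube -p addons-391119 ssh -- sudo runc list -f json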

TestAddons/parallel/Registry (16.59s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 14.905305ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003563954s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003288184s
addons_test.go:392: (dbg) Run:  kubectl --context addons-391119 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-391119 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-391119 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.931474372s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 ip
2025/12/02 18:52:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable registry --alsologtostderr -v=1: exit status 11 (339.880757ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 18:52:23.493706   12384 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:52:23.493938   12384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:23.493966   12384 out.go:374] Setting ErrFile to fd 2...
	I1202 18:52:23.493987   12384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:23.494259   12384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:52:23.494558   12384 mustload.go:66] Loading cluster: addons-391119
	I1202 18:52:23.494982   12384 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:23.495019   12384 addons.go:622] checking whether the cluster is paused
	I1202 18:52:23.495163   12384 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:23.495192   12384 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:52:23.495759   12384 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:52:23.537400   12384 ssh_runner.go:195] Run: systemctl --version
	I1202 18:52:23.537452   12384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:52:23.558954   12384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:52:23.663943   12384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:52:23.664061   12384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:52:23.693274   12384 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:52:23.693303   12384 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:52:23.693343   12384 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:52:23.693348   12384 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:52:23.693352   12384 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:52:23.693356   12384 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:52:23.693360   12384 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:52:23.693367   12384 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:52:23.693370   12384 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:52:23.693377   12384 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:52:23.693385   12384 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:52:23.693389   12384 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:52:23.693392   12384 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:52:23.693395   12384 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:52:23.693398   12384 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:52:23.693403   12384 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:52:23.693406   12384 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:52:23.693410   12384 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:52:23.693413   12384 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:52:23.693416   12384 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:52:23.693422   12384 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:52:23.693426   12384 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:52:23.693432   12384 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:52:23.693437   12384 cri.go:89] found id: ""
	I1202 18:52:23.693493   12384 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:52:23.708988   12384 out.go:203] 
	W1202 18:52:23.711931   12384 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:52:23.711955   12384 out.go:285] * 
	* 
	W1202 18:52:23.716791   12384 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:52:23.719727   12384 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.59s)

TestAddons/parallel/RegistryCreds (0.5s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.957193ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-391119
addons_test.go:332: (dbg) Run:  kubectl --context addons-391119 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.205163ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 18:53:17.904603   13904 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:53:17.904802   13904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:53:17.904833   13904 out.go:374] Setting ErrFile to fd 2...
	I1202 18:53:17.904854   13904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:53:17.905118   13904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:53:17.905484   13904 mustload.go:66] Loading cluster: addons-391119
	I1202 18:53:17.906068   13904 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:53:17.906111   13904 addons.go:622] checking whether the cluster is paused
	I1202 18:53:17.906247   13904 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:53:17.906281   13904 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:53:17.906783   13904 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:53:17.925362   13904 ssh_runner.go:195] Run: systemctl --version
	I1202 18:53:17.925414   13904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:53:17.942068   13904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:53:18.048157   13904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:53:18.048250   13904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:53:18.080032   13904 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:53:18.080054   13904 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:53:18.080059   13904 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:53:18.080064   13904 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:53:18.080068   13904 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:53:18.080072   13904 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:53:18.080076   13904 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:53:18.080079   13904 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:53:18.080083   13904 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:53:18.080089   13904 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:53:18.080093   13904 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:53:18.080096   13904 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:53:18.080100   13904 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:53:18.080103   13904 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:53:18.080107   13904 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:53:18.080117   13904 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:53:18.080123   13904 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:53:18.080128   13904 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:53:18.080131   13904 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:53:18.080135   13904 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:53:18.080139   13904 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:53:18.080143   13904 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:53:18.080146   13904 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:53:18.080149   13904 cri.go:89] found id: ""
	I1202 18:53:18.080204   13904 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:53:18.101555   13904 out.go:203] 
	W1202 18:53:18.104765   13904 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:53:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:53:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:53:18.104846   13904 out.go:285] * 
	* 
	W1202 18:53:18.110021   13904 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:53:18.113005   13904 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)

TestAddons/parallel/Ingress (145.03s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-391119 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-391119 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-391119 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [5a6688dd-f21f-4430-915e-35101dafcb10] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [5a6688dd-f21f-4430-915e-35101dafcb10] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004147014s
I1202 18:52:45.174442    4470 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.23334261s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-391119 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-391119
helpers_test.go:243: (dbg) docker inspect addons-391119:

-- stdout --
	[
	    {
	        "Id": "01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41",
	        "Created": "2025-12-02T18:49:45.529726904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5891,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T18:49:45.594070391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/hostname",
	        "HostsPath": "/var/lib/docker/containers/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/hosts",
	        "LogPath": "/var/lib/docker/containers/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41-json.log",
	        "Name": "/addons-391119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-391119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-391119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41",
	                "LowerDir": "/var/lib/docker/overlay2/3f2b18b93a421b917bfefd69533411381c3753105a6bd363c69e34f9320d11a1-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f2b18b93a421b917bfefd69533411381c3753105a6bd363c69e34f9320d11a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f2b18b93a421b917bfefd69533411381c3753105a6bd363c69e34f9320d11a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f2b18b93a421b917bfefd69533411381c3753105a6bd363c69e34f9320d11a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-391119",
	                "Source": "/var/lib/docker/volumes/addons-391119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-391119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-391119",
	                "name.minikube.sigs.k8s.io": "addons-391119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f72e2bc4f4b289a730b02700b7a47804af9018d946dcc264a7de0cc63184978",
	            "SandboxKey": "/var/run/docker/netns/9f72e2bc4f4b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-391119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:3a:e7:d1:50:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a219337ad6b1bd266ea3e5b9061fe73db277be6ee58b370bfba7d0e5972d90e1",
	                    "EndpointID": "901b192e056705a0076af774de77bd44b8dc6af1c1247a61660a747cab5eef4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-391119",
	                        "01bfa6b917fd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-391119 -n addons-391119
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-391119 logs -n 25: (1.443485564s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-936869                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-936869 │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ start   │ --download-only -p binary-mirror-279600 --alsologtostderr --binary-mirror http://127.0.0.1:42717 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-279600   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	│ delete  │ -p binary-mirror-279600                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-279600   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ addons  │ enable dashboard -p addons-391119                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	│ addons  │ disable dashboard -p addons-391119                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	│ start   │ -p addons-391119 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:51 UTC │
	│ addons  │ addons-391119 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:51 UTC │                     │
	│ addons  │ addons-391119 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ addons  │ enable headlamp -p addons-391119 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ addons  │ addons-391119 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ addons  │ addons-391119 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ addons  │ addons-391119 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ ip      │ addons-391119 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │ 02 Dec 25 18:52 UTC │
	│ addons  │ addons-391119 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ addons  │ addons-391119 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ ssh     │ addons-391119 ssh cat /opt/local-path-provisioner/pvc-d9b26da9-ba59-4d1e-8d9e-2c2373daa6ce_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │ 02 Dec 25 18:52 UTC │
	│ addons  │ addons-391119 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ addons  │ addons-391119 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ ssh     │ addons-391119 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ addons  │ addons-391119 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:53 UTC │                     │
	│ addons  │ addons-391119 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:53 UTC │                     │
	│ addons  │ addons-391119 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:53 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-391119                                                                                                                                                                                                                                                                                                                                                                                           │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:53 UTC │ 02 Dec 25 18:53 UTC │
	│ addons  │ addons-391119 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:53 UTC │                     │
	│ ip      │ addons-391119 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:54 UTC │ 02 Dec 25 18:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 18:49:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
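The [IWEF] prefix on each entry encodes the severity (Info, Warning, Error, Fatal). When triaging a start log like this one, filtering out the Info lines is often the quickest first pass; a minimal sketch, assuming the raw log has been saved to a file (start.log is a hypothetical name) and allowing for leading whitespace:

	# Show only warning/error/fatal entries from a saved copy of this start log
	grep -E '^[[:space:]]*[WEF][0-9]{4} ' start.log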
	I1202 18:49:21.151560    5489 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:49:21.151692    5489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:49:21.151702    5489 out.go:374] Setting ErrFile to fd 2...
	I1202 18:49:21.151708    5489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:49:21.151963    5489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:49:21.152401    5489 out.go:368] Setting JSON to false
	I1202 18:49:21.153129    5489 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1900,"bootTime":1764699462,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 18:49:21.153193    5489 start.go:143] virtualization:  
	I1202 18:49:21.158367    5489 out.go:179] * [addons-391119] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 18:49:21.161371    5489 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 18:49:21.161480    5489 notify.go:221] Checking for updates...
	I1202 18:49:21.167211    5489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 18:49:21.169991    5489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:49:21.172807    5489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 18:49:21.175629    5489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 18:49:21.178527    5489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 18:49:21.181482    5489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 18:49:21.220770    5489 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 18:49:21.220887    5489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:49:21.303333    5489 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:49:21.29235335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:49:21.303439    5489 docker.go:319] overlay module found
	I1202 18:49:21.306650    5489 out.go:179] * Using the docker driver based on user configuration
	I1202 18:49:21.309431    5489 start.go:309] selected driver: docker
	I1202 18:49:21.309460    5489 start.go:927] validating driver "docker" against <nil>
	I1202 18:49:21.309482    5489 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 18:49:21.310303    5489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:49:21.404424    5489 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:49:21.395321691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:49:21.404571    5489 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 18:49:21.404777    5489 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 18:49:21.407551    5489 out.go:179] * Using Docker driver with root privileges
	I1202 18:49:21.410265    5489 cni.go:84] Creating CNI manager for ""
	I1202 18:49:21.410326    5489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:49:21.410334    5489 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 18:49:21.410407    5489 start.go:353] cluster config:
	{Name:addons-391119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1202 18:49:21.413500    5489 out.go:179] * Starting "addons-391119" primary control-plane node in "addons-391119" cluster
	I1202 18:49:21.416280    5489 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 18:49:21.419162    5489 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 18:49:21.421980    5489 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:49:21.422019    5489 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 18:49:21.422028    5489 cache.go:65] Caching tarball of preloaded images
	I1202 18:49:21.422122    5489 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 18:49:21.422134    5489 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 18:49:21.422488    5489 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/config.json ...
	I1202 18:49:21.422509    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/config.json: {Name:mk35d744d67e94b85876ec704acb2daf7dc5017b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:21.422662    5489 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 18:49:21.440793    5489 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 18:49:21.440916    5489 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 18:49:21.440934    5489 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 18:49:21.440938    5489 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 18:49:21.440945    5489 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 18:49:21.440950    5489 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1202 18:49:38.957451    5489 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1202 18:49:38.957489    5489 cache.go:243] Successfully downloaded all kic artifacts
	I1202 18:49:38.957535    5489 start.go:360] acquireMachinesLock for addons-391119: {Name:mkd9ba4106d5f0301c0e1410c2737c2451b7b344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 18:49:38.957680    5489 start.go:364] duration metric: took 120.908µs to acquireMachinesLock for "addons-391119"
	I1202 18:49:38.957770    5489 start.go:93] Provisioning new machine with config: &{Name:addons-391119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 18:49:38.957854    5489 start.go:125] createHost starting for "" (driver="docker")
	I1202 18:49:38.961425    5489 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1202 18:49:38.961685    5489 start.go:159] libmachine.API.Create for "addons-391119" (driver="docker")
	I1202 18:49:38.961723    5489 client.go:173] LocalClient.Create starting
	I1202 18:49:38.961843    5489 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem
	I1202 18:49:39.247034    5489 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem
	I1202 18:49:39.426048    5489 cli_runner.go:164] Run: docker network inspect addons-391119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 18:49:39.441560    5489 cli_runner.go:211] docker network inspect addons-391119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 18:49:39.441683    5489 network_create.go:284] running [docker network inspect addons-391119] to gather additional debugging logs...
	I1202 18:49:39.441702    5489 cli_runner.go:164] Run: docker network inspect addons-391119
	W1202 18:49:39.456818    5489 cli_runner.go:211] docker network inspect addons-391119 returned with exit code 1
	I1202 18:49:39.456854    5489 network_create.go:287] error running [docker network inspect addons-391119]: docker network inspect addons-391119: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-391119 not found
	I1202 18:49:39.456868    5489 network_create.go:289] output of [docker network inspect addons-391119]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-391119 not found
	
	** /stderr **
	I1202 18:49:39.456956    5489 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 18:49:39.474269    5489 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001af4660}
	I1202 18:49:39.474321    5489 network_create.go:124] attempt to create docker network addons-391119 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 18:49:39.474382    5489 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-391119 addons-391119
	I1202 18:49:39.537327    5489 network_create.go:108] docker network addons-391119 192.168.49.0/24 created
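Above, minikube found a free private subnet (192.168.49.0/24) and created a dedicated bridge network for the profile. For reference, the subnet and gateway can be read back with a docker network inspect template; a sketch, using the network name from the log:

	# Print the subnet and gateway of the network minikube created (run on the CI host)
	docker network inspect addons-391119 --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'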
	I1202 18:49:39.537368    5489 kic.go:121] calculated static IP "192.168.49.2" for the "addons-391119" container
	I1202 18:49:39.537450    5489 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 18:49:39.553383    5489 cli_runner.go:164] Run: docker volume create addons-391119 --label name.minikube.sigs.k8s.io=addons-391119 --label created_by.minikube.sigs.k8s.io=true
	I1202 18:49:39.579393    5489 oci.go:103] Successfully created a docker volume addons-391119
	I1202 18:49:39.579486    5489 cli_runner.go:164] Run: docker run --rm --name addons-391119-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391119 --entrypoint /usr/bin/test -v addons-391119:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 18:49:41.343206    5489 cli_runner.go:217] Completed: docker run --rm --name addons-391119-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391119 --entrypoint /usr/bin/test -v addons-391119:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.763672865s)
	I1202 18:49:41.343247    5489 oci.go:107] Successfully prepared a docker volume addons-391119
	I1202 18:49:41.343295    5489 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:49:41.343304    5489 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 18:49:41.343363    5489 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-391119:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 18:49:45.458717    5489 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-391119:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.115299195s)
	I1202 18:49:45.458753    5489 kic.go:203] duration metric: took 4.11544498s to extract preloaded images to volume ...
	W1202 18:49:45.458891    5489 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 18:49:45.458997    5489 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 18:49:45.515084    5489 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-391119 --name addons-391119 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391119 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-391119 --network addons-391119 --ip 192.168.49.2 --volume addons-391119:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 18:49:45.844880    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Running}}
	I1202 18:49:45.868171    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:49:45.893429    5489 cli_runner.go:164] Run: docker exec addons-391119 stat /var/lib/dpkg/alternatives/iptables
	I1202 18:49:45.944452    5489 oci.go:144] the created container "addons-391119" has a running status.
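The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 to ephemeral host ports bound to 127.0.0.1; the SSH steps below connect to whichever host port was mapped to 22/tcp (32768 in this run). A quick way to see all of the assigned mappings, as a sketch:

	# List the host port mappings for the kic container created above
	docker port addons-391119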
	I1202 18:49:45.944477    5489 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa...
	I1202 18:49:46.428698    5489 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 18:49:46.447607    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:49:46.469184    5489 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 18:49:46.469203    5489 kic_runner.go:114] Args: [docker exec --privileged addons-391119 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 18:49:46.509975    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:49:46.527350    5489 machine.go:94] provisionDockerMachine start ...
	I1202 18:49:46.527441    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:46.544504    5489 main.go:143] libmachine: Using SSH client type: native
	I1202 18:49:46.544833    5489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 18:49:46.544843    5489 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 18:49:46.545539    5489 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 18:49:49.697126    5489 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-391119
	
	I1202 18:49:49.697153    5489 ubuntu.go:182] provisioning hostname "addons-391119"
	I1202 18:49:49.697265    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:49.714460    5489 main.go:143] libmachine: Using SSH client type: native
	I1202 18:49:49.714767    5489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 18:49:49.714782    5489 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-391119 && echo "addons-391119" | sudo tee /etc/hostname
	I1202 18:49:49.870646    5489 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-391119
	
	I1202 18:49:49.870720    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:49.887924    5489 main.go:143] libmachine: Using SSH client type: native
	I1202 18:49:49.888228    5489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 18:49:49.888251    5489 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-391119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-391119/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-391119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 18:49:50.038385    5489 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 18:49:50.038412    5489 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 18:49:50.038446    5489 ubuntu.go:190] setting up certificates
	I1202 18:49:50.038460    5489 provision.go:84] configureAuth start
	I1202 18:49:50.038523    5489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391119
	I1202 18:49:50.067808    5489 provision.go:143] copyHostCerts
	I1202 18:49:50.067897    5489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 18:49:50.068033    5489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 18:49:50.068162    5489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 18:49:50.068302    5489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.addons-391119 san=[127.0.0.1 192.168.49.2 addons-391119 localhost minikube]
	I1202 18:49:50.427218    5489 provision.go:177] copyRemoteCerts
	I1202 18:49:50.427283    5489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 18:49:50.427326    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:50.446993    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:50.552940    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 18:49:50.569096    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 18:49:50.586441    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 18:49:50.602830    5489 provision.go:87] duration metric: took 564.348464ms to configureAuth
	I1202 18:49:50.602901    5489 ubuntu.go:206] setting minikube options for container-runtime
	I1202 18:49:50.603129    5489 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:49:50.603264    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:50.619804    5489 main.go:143] libmachine: Using SSH client type: native
	I1202 18:49:50.620109    5489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 18:49:50.620121    5489 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 18:49:50.913721    5489 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 18:49:50.913743    5489 machine.go:97] duration metric: took 4.386374668s to provisionDockerMachine
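The SSH command just above wrote CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarted cri-o inside the node. If that step ever needs to be re-verified by hand, something along these lines should work (a sketch, using the profile name from this run):

	# Confirm the drop-in exists and cri-o came back up after the restart
	minikube -p addons-391119 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p addons-391119 ssh -- sudo systemctl is-active crio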
	I1202 18:49:50.913755    5489 client.go:176] duration metric: took 11.952023678s to LocalClient.Create
	I1202 18:49:50.913768    5489 start.go:167] duration metric: took 11.952083918s to libmachine.API.Create "addons-391119"
	I1202 18:49:50.913775    5489 start.go:293] postStartSetup for "addons-391119" (driver="docker")
	I1202 18:49:50.913785    5489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 18:49:50.913854    5489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 18:49:50.913908    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:50.931968    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:51.034154    5489 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 18:49:51.037647    5489 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 18:49:51.037696    5489 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 18:49:51.037707    5489 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 18:49:51.037775    5489 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 18:49:51.037808    5489 start.go:296] duration metric: took 124.026512ms for postStartSetup
	I1202 18:49:51.038108    5489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391119
	I1202 18:49:51.054407    5489 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/config.json ...
	I1202 18:49:51.054683    5489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 18:49:51.054732    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:51.072070    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:51.174670    5489 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 18:49:51.179336    5489 start.go:128] duration metric: took 12.221466489s to createHost
	I1202 18:49:51.179411    5489 start.go:83] releasing machines lock for "addons-391119", held for 12.221660406s
	I1202 18:49:51.179527    5489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391119
	I1202 18:49:51.196987    5489 ssh_runner.go:195] Run: cat /version.json
	I1202 18:49:51.197015    5489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 18:49:51.197034    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:51.197078    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:51.217953    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:51.225771    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:51.412949    5489 ssh_runner.go:195] Run: systemctl --version
	I1202 18:49:51.418955    5489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 18:49:51.452458    5489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 18:49:51.456480    5489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 18:49:51.456555    5489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 18:49:51.482764    5489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 18:49:51.482835    5489 start.go:496] detecting cgroup driver to use...
	I1202 18:49:51.482875    5489 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 18:49:51.482929    5489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 18:49:51.499582    5489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 18:49:51.512965    5489 docker.go:218] disabling cri-docker service (if available) ...
	I1202 18:49:51.513060    5489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 18:49:51.530359    5489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 18:49:51.549199    5489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 18:49:51.664754    5489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 18:49:51.786676    5489 docker.go:234] disabling docker service ...
	I1202 18:49:51.786760    5489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 18:49:51.806700    5489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 18:49:51.820254    5489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 18:49:51.950358    5489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 18:49:52.069421    5489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 18:49:52.083030    5489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 18:49:52.097906    5489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 18:49:52.097988    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.106848    5489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 18:49:52.106967    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.116120    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.124454    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.132705    5489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 18:49:52.140694    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.149162    5489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.162605    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
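The sed edits above set the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf. A hedged way to confirm the drop-in ended up with those values:

	# Check the cri-o drop-in reflects the edits made above
	minikube -p addons-391119 ssh -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"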
	I1202 18:49:52.171030    5489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 18:49:52.178104    5489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 18:49:52.178191    5489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 18:49:52.191804    5489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 18:49:52.199257    5489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:49:52.317792    5489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 18:49:52.497329    5489 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 18:49:52.497413    5489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 18:49:52.500935    5489 start.go:564] Will wait 60s for crictl version
	I1202 18:49:52.500993    5489 ssh_runner.go:195] Run: which crictl
	I1202 18:49:52.504226    5489 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 18:49:52.530484    5489 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 18:49:52.530614    5489 ssh_runner.go:195] Run: crio --version
	I1202 18:49:52.558221    5489 ssh_runner.go:195] Run: crio --version
	I1202 18:49:52.592715    5489 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 18:49:52.595477    5489 cli_runner.go:164] Run: docker network inspect addons-391119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 18:49:52.611211    5489 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 18:49:52.614768    5489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 18:49:52.623923    5489 kubeadm.go:884] updating cluster {Name:addons-391119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 18:49:52.624033    5489 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:49:52.624093    5489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 18:49:52.657001    5489 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 18:49:52.657025    5489 crio.go:433] Images already preloaded, skipping extraction
	I1202 18:49:52.657080    5489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 18:49:52.683608    5489 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 18:49:52.683631    5489 cache_images.go:86] Images are preloaded, skipping loading
	I1202 18:49:52.683639    5489 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 18:49:52.683724    5489 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-391119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
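The unit template above is installed on the node as a systemd drop-in (the 363-byte 10-kubeadm.conf copied a few lines below). To see exactly what the kubelet is started with once provisioning completes, a sketch:

	# Show the kubelet unit together with the drop-in minikube generated
	minikube -p addons-391119 ssh -- sudo systemctl cat kubelet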
	I1202 18:49:52.683804    5489 ssh_runner.go:195] Run: crio config
	I1202 18:49:52.746863    5489 cni.go:84] Creating CNI manager for ""
	I1202 18:49:52.746889    5489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:49:52.746911    5489 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 18:49:52.746943    5489 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-391119 NodeName:addons-391119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 18:49:52.747094    5489 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-391119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 18:49:52.747179    5489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 18:49:52.754756    5489 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 18:49:52.754864    5489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 18:49:52.762161    5489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 18:49:52.774347    5489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 18:49:52.786860    5489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
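The rendered kubeadm config shown above (2210 bytes) is staged at /var/tmp/minikube/kubeadm.yaml.new. If it ever needs to be sanity-checked by hand, kubeadm can parse it without applying anything; a sketch only, since on a node that has already been bootstrapped a dry run may still complain about existing state:

	# Parse/validate the staged kubeadm config without applying it
	minikube -p addons-391119 ssh -- "sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run"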
	I1202 18:49:52.799613    5489 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 18:49:52.802995    5489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 18:49:52.812175    5489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:49:52.930581    5489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 18:49:52.944910    5489 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119 for IP: 192.168.49.2
	I1202 18:49:52.944928    5489 certs.go:195] generating shared ca certs ...
	I1202 18:49:52.944943    5489 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:52.945062    5489 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 18:49:53.003181    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt ...
	I1202 18:49:53.003212    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt: {Name:mkd5b1a9f0fad7d0ecc11f2846b0a7f559226cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.003384    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key ...
	I1202 18:49:53.003399    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key: {Name:mk8ac871d12285a41ebadf8ebc95b8c667ac34ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.003475    5489 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 18:49:53.082891    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt ...
	I1202 18:49:53.082931    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt: {Name:mk22192dbf2731a3b3c66a7552e99ff805da04a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.083107    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key ...
	I1202 18:49:53.083119    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key: {Name:mkdcb754b25a4ed546d2e13cf9eb82c336b19234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.083195    5489 certs.go:257] generating profile certs ...
	I1202 18:49:53.083253    5489 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.key
	I1202 18:49:53.083269    5489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt with IP's: []
	I1202 18:49:53.333281    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt ...
	I1202 18:49:53.333311    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: {Name:mkc369093f7111c2a19e4c8ebab715eb936404cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.333484    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.key ...
	I1202 18:49:53.333497    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.key: {Name:mkc8bff54b56ba34d43f581da01a9dd0989cd180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.333585    5489 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key.5d6cb8eb
	I1202 18:49:53.333603    5489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt.5d6cb8eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 18:49:53.686212    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt.5d6cb8eb ...
	I1202 18:49:53.686242    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt.5d6cb8eb: {Name:mkd3317e6b5fa90c4661316fcf9e65c07fa3648c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.686423    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key.5d6cb8eb ...
	I1202 18:49:53.686438    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key.5d6cb8eb: {Name:mkfdd3ae873064a062b9c5e5acfce475eb3ec12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.686521    5489 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt.5d6cb8eb -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt
	I1202 18:49:53.686602    5489 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key.5d6cb8eb -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key
	I1202 18:49:53.686664    5489 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.key
	I1202 18:49:53.686683    5489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.crt with IP's: []
	I1202 18:49:53.761580    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.crt ...
	I1202 18:49:53.761607    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.crt: {Name:mkf899ff3aa2aa4efa224c71c03bb9e29baa4305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.761768    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.key ...
	I1202 18:49:53.761779    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.key: {Name:mk158bcb5647a08b7e4ef0c069c9cb4748caa22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.761959    5489 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 18:49:53.761998    5489 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 18:49:53.762028    5489 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 18:49:53.762060    5489 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 18:49:53.762635    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 18:49:53.780269    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 18:49:53.799769    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 18:49:53.816804    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 18:49:53.833558    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 18:49:53.850488    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 18:49:53.869689    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 18:49:53.886684    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 18:49:53.903508    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 18:49:53.920143    5489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 18:49:53.931931    5489 ssh_runner.go:195] Run: openssl version
	I1202 18:49:53.937989    5489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 18:49:53.946017    5489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:49:53.949332    5489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:49:53.949391    5489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:49:53.990931    5489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 18:49:53.998777    5489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 18:49:54.002240    5489 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 18:49:54.002293    5489 kubeadm.go:401] StartCluster: {Name:addons-391119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 18:49:54.002372    5489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:49:54.002439    5489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:49:54.034458    5489 cri.go:89] found id: ""
	I1202 18:49:54.034543    5489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 18:49:54.042910    5489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 18:49:54.050776    5489 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 18:49:54.050869    5489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 18:49:54.058786    5489 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 18:49:54.058805    5489 kubeadm.go:158] found existing configuration files:
	
	I1202 18:49:54.058859    5489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 18:49:54.066780    5489 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 18:49:54.066851    5489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 18:49:54.077225    5489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 18:49:54.085202    5489 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 18:49:54.085279    5489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 18:49:54.093137    5489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 18:49:54.101016    5489 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 18:49:54.101090    5489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 18:49:54.108668    5489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 18:49:54.116185    5489 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 18:49:54.116278    5489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 18:49:54.123287    5489 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 18:49:54.164380    5489 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 18:49:54.164446    5489 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 18:49:54.188139    5489 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 18:49:54.188219    5489 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 18:49:54.188260    5489 kubeadm.go:319] OS: Linux
	I1202 18:49:54.188310    5489 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 18:49:54.188362    5489 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 18:49:54.188413    5489 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 18:49:54.188463    5489 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 18:49:54.188515    5489 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 18:49:54.188566    5489 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 18:49:54.188615    5489 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 18:49:54.188665    5489 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 18:49:54.188714    5489 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 18:49:54.259497    5489 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 18:49:54.259632    5489 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 18:49:54.259756    5489 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 18:49:54.267224    5489 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 18:49:54.273761    5489 out.go:252]   - Generating certificates and keys ...
	I1202 18:49:54.273855    5489 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 18:49:54.273928    5489 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 18:49:54.923195    5489 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 18:49:55.128810    5489 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 18:49:55.401464    5489 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 18:49:56.012419    5489 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 18:49:56.410899    5489 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 18:49:56.411252    5489 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-391119 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 18:49:56.674333    5489 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 18:49:56.674719    5489 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-391119 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 18:49:57.842638    5489 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 18:49:58.271773    5489 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 18:49:58.314443    5489 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 18:49:58.314994    5489 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 18:49:58.498962    5489 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 18:49:58.761365    5489 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 18:50:00.740171    5489 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 18:50:00.930579    5489 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 18:50:01.408648    5489 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 18:50:01.409489    5489 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 18:50:01.412284    5489 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 18:50:01.415804    5489 out.go:252]   - Booting up control plane ...
	I1202 18:50:01.415914    5489 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 18:50:01.415999    5489 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 18:50:01.416070    5489 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 18:50:01.432270    5489 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 18:50:01.432586    5489 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 18:50:01.441414    5489 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 18:50:01.445692    5489 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 18:50:01.445765    5489 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 18:50:01.582143    5489 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 18:50:01.582264    5489 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 18:50:02.581293    5489 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001205626s
	I1202 18:50:02.585927    5489 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 18:50:02.586023    5489 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1202 18:50:02.586334    5489 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 18:50:02.586424    5489 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 18:50:05.263166    5489 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.676256657s
	I1202 18:50:07.544411    5489 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.957227339s
	I1202 18:50:08.088575    5489 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50224687s
	I1202 18:50:08.125715    5489 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 18:50:08.641235    5489 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 18:50:08.655424    5489 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 18:50:08.655634    5489 kubeadm.go:319] [mark-control-plane] Marking the node addons-391119 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 18:50:08.666468    5489 kubeadm.go:319] [bootstrap-token] Using token: njyjbc.wmlogeow2ifd8inq
	I1202 18:50:08.669342    5489 out.go:252]   - Configuring RBAC rules ...
	I1202 18:50:08.669469    5489 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 18:50:08.674331    5489 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 18:50:08.682471    5489 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 18:50:08.689231    5489 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 18:50:08.693171    5489 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 18:50:08.699540    5489 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 18:50:08.838421    5489 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 18:50:09.280619    5489 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 18:50:09.842896    5489 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 18:50:09.844119    5489 kubeadm.go:319] 
	I1202 18:50:09.844193    5489 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 18:50:09.844207    5489 kubeadm.go:319] 
	I1202 18:50:09.844284    5489 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 18:50:09.844288    5489 kubeadm.go:319] 
	I1202 18:50:09.844313    5489 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 18:50:09.844372    5489 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 18:50:09.844422    5489 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 18:50:09.844426    5489 kubeadm.go:319] 
	I1202 18:50:09.844480    5489 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 18:50:09.844483    5489 kubeadm.go:319] 
	I1202 18:50:09.844531    5489 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 18:50:09.844535    5489 kubeadm.go:319] 
	I1202 18:50:09.844587    5489 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 18:50:09.844662    5489 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 18:50:09.844730    5489 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 18:50:09.844735    5489 kubeadm.go:319] 
	I1202 18:50:09.844819    5489 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 18:50:09.844897    5489 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 18:50:09.844901    5489 kubeadm.go:319] 
	I1202 18:50:09.844986    5489 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token njyjbc.wmlogeow2ifd8inq \
	I1202 18:50:09.845089    5489 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04aaaaae77b68f960c0a9ced9ec2515a576e5d33be14c52dd78ac859fdceb88b \
	I1202 18:50:09.845110    5489 kubeadm.go:319] 	--control-plane 
	I1202 18:50:09.845113    5489 kubeadm.go:319] 
	I1202 18:50:09.845199    5489 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 18:50:09.845203    5489 kubeadm.go:319] 
	I1202 18:50:09.845285    5489 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token njyjbc.wmlogeow2ifd8inq \
	I1202 18:50:09.845387    5489 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04aaaaae77b68f960c0a9ced9ec2515a576e5d33be14c52dd78ac859fdceb88b 
	I1202 18:50:09.847691    5489 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1202 18:50:09.847913    5489 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 18:50:09.848027    5489 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 18:50:09.848042    5489 cni.go:84] Creating CNI manager for ""
	I1202 18:50:09.848050    5489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:50:09.851085    5489 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 18:50:09.853919    5489 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 18:50:09.857513    5489 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 18:50:09.857529    5489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 18:50:09.871760    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 18:50:10.187180    5489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 18:50:10.187319    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:10.187405    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-391119 minikube.k8s.io/updated_at=2025_12_02T18_50_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689 minikube.k8s.io/name=addons-391119 minikube.k8s.io/primary=true
	I1202 18:50:10.375280    5489 ops.go:34] apiserver oom_adj: -16
	I1202 18:50:10.375297    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:10.875870    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:11.375784    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:11.876189    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:12.376163    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:12.876195    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:13.375549    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:13.875607    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:14.376319    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:14.483282    5489 kubeadm.go:1114] duration metric: took 4.296015962s to wait for elevateKubeSystemPrivileges
	I1202 18:50:14.483316    5489 kubeadm.go:403] duration metric: took 20.481030975s to StartCluster
	I1202 18:50:14.483334    5489 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:50:14.483455    5489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:50:14.483881    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:50:14.484100    5489 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 18:50:14.484259    5489 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 18:50:14.484565    5489 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:50:14.484663    5489 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 18:50:14.484779    5489 addons.go:70] Setting yakd=true in profile "addons-391119"
	I1202 18:50:14.484803    5489 addons.go:239] Setting addon yakd=true in "addons-391119"
	I1202 18:50:14.484832    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.485403    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.485946    5489 addons.go:70] Setting metrics-server=true in profile "addons-391119"
	I1202 18:50:14.485964    5489 addons.go:239] Setting addon metrics-server=true in "addons-391119"
	I1202 18:50:14.485984    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.486393    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.486522    5489 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-391119"
	I1202 18:50:14.486540    5489 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-391119"
	I1202 18:50:14.486557    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.487036    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.490712    5489 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-391119"
	I1202 18:50:14.491036    5489 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-391119"
	I1202 18:50:14.491139    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.493547    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.490867    5489 addons.go:70] Setting cloud-spanner=true in profile "addons-391119"
	I1202 18:50:14.497817    5489 addons.go:239] Setting addon cloud-spanner=true in "addons-391119"
	I1202 18:50:14.497893    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.498381    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.490877    5489 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-391119"
	I1202 18:50:14.490888    5489 addons.go:70] Setting default-storageclass=true in profile "addons-391119"
	I1202 18:50:14.490892    5489 addons.go:70] Setting gcp-auth=true in profile "addons-391119"
	I1202 18:50:14.490895    5489 addons.go:70] Setting ingress=true in profile "addons-391119"
	I1202 18:50:14.490898    5489 addons.go:70] Setting ingress-dns=true in profile "addons-391119"
	I1202 18:50:14.490901    5489 addons.go:70] Setting inspektor-gadget=true in profile "addons-391119"
	I1202 18:50:14.490935    5489 out.go:179] * Verifying Kubernetes components...
	I1202 18:50:14.490952    5489 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-391119"
	I1202 18:50:14.490956    5489 addons.go:70] Setting registry=true in profile "addons-391119"
	I1202 18:50:14.490962    5489 addons.go:70] Setting registry-creds=true in profile "addons-391119"
	I1202 18:50:14.490968    5489 addons.go:70] Setting storage-provisioner=true in profile "addons-391119"
	I1202 18:50:14.490975    5489 addons.go:70] Setting volumesnapshots=true in profile "addons-391119"
	I1202 18:50:14.490986    5489 addons.go:70] Setting volcano=true in profile "addons-391119"
	I1202 18:50:14.498706    5489 addons.go:239] Setting addon volcano=true in "addons-391119"
	I1202 18:50:14.498734    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.502902    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.504261    5489 addons.go:239] Setting addon inspektor-gadget=true in "addons-391119"
	I1202 18:50:14.504330    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.504827    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.541882    5489 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-391119"
	I1202 18:50:14.542286    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.543068    5489 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-391119"
	I1202 18:50:14.543104    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.543543    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.565619    5489 addons.go:239] Setting addon registry=true in "addons-391119"
	I1202 18:50:14.565761    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.566236    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.581854    5489 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-391119"
	I1202 18:50:14.582203    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.589320    5489 addons.go:239] Setting addon registry-creds=true in "addons-391119"
	I1202 18:50:14.589383    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.589902    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.605200    5489 mustload.go:66] Loading cluster: addons-391119
	I1202 18:50:14.605423    5489 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:50:14.605715    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.621853    5489 addons.go:239] Setting addon storage-provisioner=true in "addons-391119"
	I1202 18:50:14.621901    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.622368    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.632883    5489 addons.go:239] Setting addon ingress=true in "addons-391119"
	I1202 18:50:14.632937    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.633403    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.637834    5489 addons.go:239] Setting addon volumesnapshots=true in "addons-391119"
	I1202 18:50:14.637891    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.638382    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.641057    5489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:50:14.655814    5489 addons.go:239] Setting addon ingress-dns=true in "addons-391119"
	I1202 18:50:14.656292    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.656826    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.677517    5489 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 18:50:14.687544    5489 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 18:50:14.708112    5489 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 18:50:14.720792    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 18:50:14.720881    5489 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 18:50:14.749977    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.721309    5489 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 18:50:14.759093    5489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 18:50:14.759249    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.721325    5489 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 18:50:14.793769    5489 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1202 18:50:14.793956    5489 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 18:50:14.721401    5489 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 18:50:14.794413    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 18:50:14.794479    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.806598    5489 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 18:50:14.806627    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 18:50:14.806686    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.813621    5489 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 18:50:14.814749    5489 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-391119"
	I1202 18:50:14.814800    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.815228    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.823828    5489 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 18:50:14.823855    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 18:50:14.823924    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.836345    5489 addons.go:239] Setting addon default-storageclass=true in "addons-391119"
	I1202 18:50:14.840746    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.841324    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.842597    5489 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 18:50:14.842612    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 18:50:14.842656    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.867012    5489 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 18:50:14.872269    5489 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 18:50:14.872328    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 18:50:14.872433    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.887178    5489 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 18:50:14.892532    5489 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 18:50:14.892643    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 18:50:14.893074    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.900250    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 18:50:14.900336    5489 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 18:50:14.900346    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 18:50:14.900399    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.916909    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 18:50:14.916931    5489 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 18:50:14.916996    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.944361    5489 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 18:50:14.947463    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 18:50:14.947610    5489 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 18:50:14.947624    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 18:50:14.947690    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.024002    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 18:50:15.024815    5489 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 18:50:15.032798    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 18:50:15.034973    5489 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 18:50:15.036063    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.047433    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 18:50:15.047610    5489 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 18:50:15.047649    5489 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 18:50:15.053780    5489 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 18:50:15.053807    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 18:50:15.053876    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.059935    5489 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 18:50:15.060079    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 18:50:15.067565    5489 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 18:50:15.067589    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 18:50:15.067654    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.070918    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 18:50:15.075695    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 18:50:15.079309    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 18:50:15.079394    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 18:50:15.079459    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.118705    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.119766    5489 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 18:50:15.119784    5489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 18:50:15.119846    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.119899    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.120573    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.139654    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.139756    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.142697    5489 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 18:50:15.147039    5489 out.go:179]   - Using image docker.io/busybox:stable
	I1202 18:50:15.151426    5489 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 18:50:15.151451    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 18:50:15.151518    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.172029    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.179893    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.185814    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.229470    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.253768    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.254713    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.255391    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	W1202 18:50:15.258012    5489 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 18:50:15.258041    5489 retry.go:31] will retry after 266.594916ms: ssh: handshake failed: EOF
	W1202 18:50:15.258115    5489 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 18:50:15.258124    5489 retry.go:31] will retry after 215.927281ms: ssh: handshake failed: EOF
	W1202 18:50:15.258162    5489 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 18:50:15.258177    5489 retry.go:31] will retry after 364.277984ms: ssh: handshake failed: EOF
	I1202 18:50:15.262220    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	W1202 18:50:15.266337    5489 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 18:50:15.266366    5489 retry.go:31] will retry after 359.391623ms: ssh: handshake failed: EOF
	I1202 18:50:15.268055    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.272457    5489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 18:50:15.587083    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 18:50:15.743964    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 18:50:15.791340    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 18:50:15.857537    5489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 18:50:15.857611    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 18:50:15.887462    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 18:50:15.923060    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 18:50:15.928698    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 18:50:15.951419    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 18:50:15.969005    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 18:50:15.985451    5489 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 18:50:15.985526    5489 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 18:50:15.994364    5489 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 18:50:15.994433    5489 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 18:50:16.039950    5489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 18:50:16.040026    5489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 18:50:16.085598    5489 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 18:50:16.085692    5489 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 18:50:16.109584    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 18:50:16.124733    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 18:50:16.124753    5489 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 18:50:16.129246    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 18:50:16.129266    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 18:50:16.176722    5489 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 18:50:16.176808    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 18:50:16.203143    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 18:50:16.205937    5489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 18:50:16.206016    5489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 18:50:16.219746    5489 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 18:50:16.219818    5489 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 18:50:16.236009    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 18:50:16.236081    5489 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 18:50:16.312885    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 18:50:16.351861    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 18:50:16.351939    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 18:50:16.363488    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 18:50:16.363568    5489 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 18:50:16.378293    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 18:50:16.378368    5489 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 18:50:16.440891    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 18:50:16.514248    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 18:50:16.514324    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 18:50:16.517032    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 18:50:16.517099    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 18:50:16.549106    5489 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 18:50:16.549179    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 18:50:16.635140    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 18:50:16.655075    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 18:50:16.655150    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 18:50:16.682750    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 18:50:16.847895    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 18:50:16.847968    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 18:50:16.897511    5489 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.625019097s)
	I1202 18:50:16.897638    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.310525462s)
	I1202 18:50:16.897836    5489 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.87299565s)
	I1202 18:50:16.897953    5489 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1202 18:50:16.899418    5489 node_ready.go:35] waiting up to 6m0s for node "addons-391119" to be "Ready" ...
	I1202 18:50:17.125383    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 18:50:17.125449    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 18:50:17.333014    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 18:50:17.333087    5489 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 18:50:17.403088    5489 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-391119" context rescaled to 1 replicas
	I1202 18:50:17.615243    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 18:50:17.615313    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 18:50:17.760927    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 18:50:17.760995    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 18:50:17.878235    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 18:50:17.878312    5489 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 18:50:18.082366    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1202 18:50:18.956574    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:19.746765    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.002717356s)
	I1202 18:50:19.746814    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.95541696s)
	I1202 18:50:19.746884    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.859404158s)
	I1202 18:50:19.746922    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.823791869s)
	I1202 18:50:20.623652    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.672152267s)
	I1202 18:50:20.623803    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.654735342s)
	I1202 18:50:20.623840    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.514163358s)
	I1202 18:50:20.623857    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.420695722s)
	I1202 18:50:20.623911    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.310956263s)
	I1202 18:50:20.624411    5489 addons.go:495] Verifying addon metrics-server=true in "addons-391119"
	I1202 18:50:20.623933    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.182971127s)
	I1202 18:50:20.624423    5489 addons.go:495] Verifying addon registry=true in "addons-391119"
	I1202 18:50:20.623960    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.988749592s)
	I1202 18:50:20.624849    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.696088232s)
	I1202 18:50:20.624893    5489 addons.go:495] Verifying addon ingress=true in "addons-391119"
	I1202 18:50:20.624034    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.941212597s)
	W1202 18:50:20.629131    5489 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 18:50:20.629166    5489 retry.go:31] will retry after 223.057158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 18:50:20.630013    5489 out.go:179] * Verifying registry addon...
	I1202 18:50:20.630072    5489 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-391119 service yakd-dashboard -n yakd-dashboard
	
	I1202 18:50:20.631875    5489 out.go:179] * Verifying ingress addon...
	I1202 18:50:20.636566    5489 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 18:50:20.637343    5489 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 18:50:20.647608    5489 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 18:50:20.647627    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:20.647922    5489 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 18:50:20.647942    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 18:50:20.650205    5489 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1202 18:50:20.852930    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 18:50:20.976900    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.894440669s)
	I1202 18:50:20.976934    5489 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-391119"
	I1202 18:50:20.979927    5489 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 18:50:20.984451    5489 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 18:50:20.991633    5489 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 18:50:20.991701    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:21.141514    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:21.142457    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 18:50:21.402603    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:21.487846    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:21.640427    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:21.640790    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:21.988282    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:22.141234    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:22.141368    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:22.488247    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:22.503184    5489 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 18:50:22.503330    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:22.521446    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:22.634611    5489 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 18:50:22.641868    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:22.642228    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:22.649507    5489 addons.go:239] Setting addon gcp-auth=true in "addons-391119"
	I1202 18:50:22.649551    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:22.650022    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:22.667902    5489 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 18:50:22.667956    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:22.685904    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:22.987436    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:23.140491    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:23.140634    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 18:50:23.403414    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:23.487759    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:23.640704    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:23.640986    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:23.649127    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.796096203s)
	I1202 18:50:23.652361    5489 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 18:50:23.655166    5489 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 18:50:23.658064    5489 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 18:50:23.658092    5489 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 18:50:23.672073    5489 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 18:50:23.672137    5489 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 18:50:23.684516    5489 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 18:50:23.684536    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 18:50:23.700274    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 18:50:23.987820    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:24.145903    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:24.146683    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:24.212984    5489 addons.go:495] Verifying addon gcp-auth=true in "addons-391119"
	I1202 18:50:24.215979    5489 out.go:179] * Verifying gcp-auth addon...
	I1202 18:50:24.219588    5489 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 18:50:24.244837    5489 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 18:50:24.244861    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:24.487237    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:24.640567    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:24.641002    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:24.722770    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:24.987458    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:25.140748    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:25.140920    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:25.222587    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:25.488360    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:25.641625    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:25.641808    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:25.722751    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:25.902678    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:25.987650    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:26.139765    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:26.140846    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:26.222615    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:26.488096    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:26.640542    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:26.641056    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:26.722970    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:26.993397    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:27.140152    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:27.140327    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:27.222861    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:27.487703    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:27.640003    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:27.640927    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:27.722600    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:27.905297    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:27.988359    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:28.141159    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:28.141306    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:28.222968    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:28.487745    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:28.640754    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:28.640893    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:28.722816    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:28.987900    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:29.140224    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:29.140262    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:29.222937    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:29.488049    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:29.640593    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:29.640743    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:29.722458    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:29.987773    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:30.140270    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:30.143122    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:30.222908    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:30.402309    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:30.488122    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:30.640341    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:30.640474    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:30.723274    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:30.988404    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:31.141353    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:31.141927    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:31.222261    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:31.487720    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:31.639721    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:31.640352    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:31.722822    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:31.988447    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:32.139383    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:32.139929    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:32.223094    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:32.403062    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:32.488141    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:32.640211    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:32.640442    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:32.723059    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:32.987204    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:33.140366    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:33.140675    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:33.223217    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:33.488259    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:33.642738    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:33.643262    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:33.722830    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:33.987156    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:34.140499    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:34.140907    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:34.222945    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:34.487726    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:34.640608    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:34.640754    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:34.722688    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:34.902721    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:34.987363    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:35.140524    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:35.140698    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:35.222383    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:35.487574    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:35.640667    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:35.641222    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:35.722966    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:35.990299    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:36.140843    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:36.141150    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:36.224317    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:36.487857    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:36.639538    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:36.640842    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:36.722418    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:36.903070    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:36.987699    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:37.139317    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:37.140235    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:37.223275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:37.487847    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:37.639389    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:37.640437    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:37.723385    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:37.987854    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:38.141113    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:38.141444    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:38.223331    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:38.488137    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:38.640338    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:38.641091    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:38.722755    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:38.988112    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:39.140222    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:39.140272    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:39.223125    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:39.402877    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:39.487820    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:39.641121    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:39.641389    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:39.722927    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:39.988100    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:40.141084    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:40.141713    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:40.222747    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:40.488107    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:40.640406    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:40.640662    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:40.723373    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:40.987981    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:41.139817    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:41.140155    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:41.223127    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:41.403474    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:41.488592    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:41.640394    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:41.640549    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:41.722295    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:41.987849    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:42.140918    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:42.141704    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:42.222906    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:42.487833    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:42.639639    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:42.640973    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:42.723133    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:42.988070    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:43.140130    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:43.140407    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:43.223519    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:43.487767    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:43.640596    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:43.640804    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:43.722602    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:43.903306    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:43.988189    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:44.140608    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:44.140680    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:44.223017    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:44.487714    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:44.639642    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:44.641007    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:44.723057    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:44.988278    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:45.142164    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:45.142609    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:45.224697    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:45.488057    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:45.640133    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:45.640415    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:45.723274    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:45.903512    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:45.988089    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:46.140387    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:46.140510    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:46.222701    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:46.487491    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:46.639861    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:46.641279    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:46.723135    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:46.988116    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:47.140827    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:47.141245    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:47.222977    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:47.487845    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:47.640718    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:47.640805    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:47.722560    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:47.904069    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:47.988263    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:48.140885    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:48.141365    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:48.223155    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:48.487837    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:48.640210    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:48.640565    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:48.722487    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:48.988059    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:49.140115    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:49.140179    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:49.223014    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:49.487876    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:49.641015    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:49.641137    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:49.723264    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:49.987933    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:50.140847    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:50.141019    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:50.222708    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:50.402684    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:50.487916    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:50.639651    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:50.640776    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:50.722708    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:50.988294    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:51.140916    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:51.141018    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:51.224328    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:51.487738    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:51.639618    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:51.640778    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:51.722664    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:51.987409    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:52.139496    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:52.141009    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:52.222778    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:52.487521    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:52.640979    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:52.641095    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:52.722966    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:52.903072    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:52.987941    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:53.140664    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:53.140711    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:53.223255    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:53.488251    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:53.640558    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:53.640690    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:53.722551    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:53.987574    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:54.139595    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:54.140424    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:54.223331    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:54.487738    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:54.639952    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:54.641867    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:54.722773    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:54.987886    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:55.141702    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:55.142420    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:55.224522    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:55.403686    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:55.487521    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:55.663776    5489 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 18:50:55.663892    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:55.678692    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:55.724279    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:55.924752    5489 node_ready.go:49] node "addons-391119" is "Ready"
	I1202 18:50:55.924838    5489 node_ready.go:38] duration metric: took 39.025077955s for node "addons-391119" to be "Ready" ...
	I1202 18:50:55.924868    5489 api_server.go:52] waiting for apiserver process to appear ...
	I1202 18:50:55.924954    5489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 18:50:55.967154    5489 api_server.go:72] duration metric: took 41.483013719s to wait for apiserver process to appear ...
	I1202 18:50:55.967209    5489 api_server.go:88] waiting for apiserver healthz status ...
	I1202 18:50:55.967239    5489 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 18:50:56.004124    5489 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 18:50:56.005237    5489 api_server.go:141] control plane version: v1.34.2
	I1202 18:50:56.005267    5489 api_server.go:131] duration metric: took 38.050894ms to wait for apiserver health ...
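
For reference, the apiserver health wait logged above amounts to polling the /healthz endpoint until it returns 200 with body "ok". Below is a minimal standalone sketch in Go; it is illustrative only (not minikube's actual api_server.go), the endpoint is the one reported in the log, and certificate verification is skipped for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200, or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver certificate is cluster-signed; skip verification in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second) // fixed poll interval; minikube's interval differs
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// 192.168.49.2:8443 is the apiserver endpoint reported in the log above.
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
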
	I1202 18:50:56.005281    5489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 18:50:56.019140    5489 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 18:50:56.019171    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:56.027654    5489 system_pods.go:59] 19 kube-system pods found
	I1202 18:50:56.027704    5489 system_pods.go:61] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.027711    5489 system_pods.go:61] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending
	I1202 18:50:56.027723    5489 system_pods.go:61] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending
	I1202 18:50:56.027735    5489 system_pods.go:61] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending
	I1202 18:50:56.027739    5489 system_pods.go:61] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.027743    5489 system_pods.go:61] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.027760    5489 system_pods.go:61] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.027765    5489 system_pods.go:61] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.027778    5489 system_pods.go:61] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.027783    5489 system_pods.go:61] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.027788    5489 system_pods.go:61] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.027794    5489 system_pods.go:61] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.027803    5489 system_pods.go:61] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending
	I1202 18:50:56.027816    5489 system_pods.go:61] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending
	I1202 18:50:56.027827    5489 system_pods.go:61] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.027844    5489 system_pods.go:61] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending
	I1202 18:50:56.027857    5489 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.027867    5489 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending
	I1202 18:50:56.027873    5489 system_pods.go:61] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending
	I1202 18:50:56.027879    5489 system_pods.go:74] duration metric: took 22.592436ms to wait for pod list to return data ...
	I1202 18:50:56.027891    5489 default_sa.go:34] waiting for default service account to be created ...
	I1202 18:50:56.039291    5489 default_sa.go:45] found service account: "default"
	I1202 18:50:56.039353    5489 default_sa.go:55] duration metric: took 11.452454ms for default service account to be created ...
	I1202 18:50:56.039417    5489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 18:50:56.050531    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:56.050581    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.050591    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:56.050596    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending
	I1202 18:50:56.050600    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending
	I1202 18:50:56.050604    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.050609    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.050616    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.050621    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.050635    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.050640    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.050645    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.050664    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.050681    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending
	I1202 18:50:56.050686    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending
	I1202 18:50:56.050692    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.050696    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending
	I1202 18:50:56.050703    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.050714    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending
	I1202 18:50:56.050718    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending
	I1202 18:50:56.050741    5489 retry.go:31] will retry after 205.871252ms: missing components: kube-dns
	I1202 18:50:56.143193    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:56.143335    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:56.228586    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:56.284384    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:56.284464    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.284490    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:56.284527    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:56.284554    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:56.284576    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.284600    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.284632    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.284655    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.284676    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.284694    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.284714    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.284746    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.284772    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:56.284793    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:56.284814    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.284848    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:56.284873    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.284895    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.284916    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:50:56.284958    5489 retry.go:31] will retry after 250.577982ms: missing components: kube-dns
	I1202 18:50:56.495272    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:56.598015    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:56.598122    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.598175    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:56.598231    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:56.598273    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:56.598322    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.598361    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.598381    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.598404    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.598454    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.598489    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.598514    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.598541    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.598580    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:56.598620    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:56.598642    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.598664    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:56.598720    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.598755    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.598797    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:50:56.598840    5489 retry.go:31] will retry after 368.305825ms: missing components: kube-dns
	I1202 18:50:56.692788    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:56.693510    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:56.722445    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:56.972248    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:56.972287    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.972296    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:56.972306    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:56.972312    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:56.972317    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.972323    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.972327    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.972332    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.972338    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.972346    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.972351    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.972361    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.972369    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:56.972379    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:56.972385    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.972391    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:56.972400    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.972406    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.972412    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:50:56.972426    5489 retry.go:31] will retry after 501.793123ms: missing components: kube-dns
	I1202 18:50:56.987494    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:57.151379    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:57.151759    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:57.222529    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:57.479951    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:57.479988    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:57.479999    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:57.480008    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:57.480017    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:57.480027    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:57.480032    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:57.480040    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:57.480045    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:57.480053    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:57.480058    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:57.480065    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:57.480071    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:57.480077    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:57.480086    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:57.480092    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:57.480104    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:57.480110    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:57.480119    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:57.480128    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:50:57.480142    5489 retry.go:31] will retry after 503.085502ms: missing components: kube-dns
	I1202 18:50:57.488765    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:57.643136    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:57.643344    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:57.742782    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:58.006392    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:58.007061    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:58.007089    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Running
	I1202 18:50:58.007108    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:58.007121    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:58.007131    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:58.007140    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:58.007144    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:58.007149    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:58.007154    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:58.007165    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:58.007169    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:58.007174    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:58.007180    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:58.007186    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:58.007195    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:58.007201    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:58.007212    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:58.007217    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:58.007224    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:58.007233    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Running
	I1202 18:50:58.007241    5489 system_pods.go:126] duration metric: took 1.967812019s to wait for k8s-apps to be running ...
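
The k8s-apps wait above follows a simple pattern: list the kube-system pods matching a label selector, and retry with a growing delay until the required components (here kube-dns/CoreDNS) report Running. A rough standalone sketch of that pattern using standard client-go calls follows; it is not minikube's system_pods.go, and the kubeconfig path and the k8s-app=kube-dns selector are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel retries, with a growing delay, until every kube-system pod
// matching selector reports the Running phase.
func waitForLabel(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		// Grow the delay between attempts, mirroring the increasing
		// "will retry after ..." intervals in the log.
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "k8s-app=kube-dns", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
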
	I1202 18:50:58.007255    5489 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 18:50:58.007306    5489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 18:50:58.023186    5489 system_svc.go:56] duration metric: took 15.918666ms WaitForService to wait for kubelet
	I1202 18:50:58.023218    5489 kubeadm.go:587] duration metric: took 43.539081571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
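
The kubelet service check a few lines above is essentially a systemctl exit-code test. A trivial local sketch with os/exec is shown below; minikube runs the command over SSH with sudo, and that plumbing is omitted here.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "is-active --quiet" exits 0 when the unit is active and non-zero otherwise,
	// so the error returned by Run() is the whole check.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
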
	I1202 18:50:58.023239    5489 node_conditions.go:102] verifying NodePressure condition ...
	I1202 18:50:58.026932    5489 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 18:50:58.026966    5489 node_conditions.go:123] node cpu capacity is 2
	I1202 18:50:58.026983    5489 node_conditions.go:105] duration metric: took 3.738071ms to run NodePressure ...
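
The NodePressure verification above reads node capacity and the pressure conditions reported by the kubelet. A short sketch along the same lines, again with standard client-go rather than minikube's node_conditions.go; the kubeconfig path is an assumption, and the capacity fields correspond to the cpu and ephemeral-storage values printed in the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity corresponds to the "cpu capacity" / "ephemeral capacity" lines above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			// A healthy node reports False for the pressure conditions.
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
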
	I1202 18:50:58.026996    5489 start.go:242] waiting for startup goroutines ...
	I1202 18:50:58.142069    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:58.142504    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:58.223134    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:58.489127    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:58.642566    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:58.642824    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:58.723172    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:58.988075    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:59.141046    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:59.141582    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:59.222449    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:59.488242    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:59.641430    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:59.642112    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:59.722928    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:59.988798    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:00.177671    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:00.180268    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:00.234017    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:00.489809    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:00.644231    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:00.644809    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:00.743709    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:00.988378    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:01.142614    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:01.143791    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:01.223846    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:01.490453    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:01.642585    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:01.643291    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:01.724244    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:01.988942    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:02.141351    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:02.141477    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:02.223468    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:02.488183    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:02.646274    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:02.646796    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:02.742436    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:02.987628    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:03.139595    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:03.140970    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:03.223271    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:03.488863    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:03.640961    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:03.641079    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:03.722661    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:03.987580    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:04.142365    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:04.142956    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:04.223461    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:04.488293    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:04.660718    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:04.667031    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:04.757016    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:04.988663    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:05.140629    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:05.140757    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:05.222693    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:05.489399    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:05.640819    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:05.641500    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:05.722695    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:05.988811    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:06.145492    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:06.145941    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:06.223506    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:06.488311    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:06.654707    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:06.654792    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:06.726255    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:06.988675    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:07.141235    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:07.142449    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:07.226839    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:07.488442    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:07.644448    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:07.645035    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:07.727502    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:07.990619    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:08.147142    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:08.147697    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:08.224944    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:08.491009    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:08.644036    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:08.644411    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:08.725873    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:08.988692    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:09.142412    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:09.144270    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:09.223645    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:09.488275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:09.642271    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:09.642680    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:09.722571    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:09.988807    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:10.141019    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:10.141563    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:10.222342    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:10.487283    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:10.641105    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:10.641305    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:10.741537    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:10.988931    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:11.142978    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:11.144643    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:11.223014    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:11.489417    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:11.641281    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:11.641735    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:11.741307    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:11.988241    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:12.142114    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:12.142423    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:12.223364    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:12.489212    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:12.641561    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:12.641858    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:12.722366    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:12.989151    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:13.142350    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:13.142554    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:13.244188    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:13.488457    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:13.642158    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:13.642664    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:13.724230    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:13.988339    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:14.141425    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:14.142035    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:14.223255    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:14.488409    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:14.640840    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:14.641043    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:14.723038    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:14.988859    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:15.140265    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:15.140527    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:15.222724    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:15.488310    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:15.647460    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:15.647593    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:15.723075    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:15.989412    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:16.139739    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:16.140917    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:16.223299    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:16.488405    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:16.641082    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:16.641254    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:16.727673    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:16.987830    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:17.140804    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:17.141336    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:17.223022    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:17.488623    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:17.641954    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:17.642004    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:17.723102    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:17.988321    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:18.142459    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:18.142787    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:18.222896    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:18.489251    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:18.640518    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:18.642242    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:18.724076    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:18.989102    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:19.139721    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:19.141287    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:19.223313    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:19.489126    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:19.641855    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:19.642200    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:19.723609    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:19.988132    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:20.145249    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:20.145560    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:20.222777    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:20.488467    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:20.645468    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:20.645819    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:20.731529    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:20.995534    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:21.140705    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:21.141046    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:21.223049    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:21.487719    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:21.640603    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:21.640857    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:21.734417    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:21.987594    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:22.139875    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:22.140532    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:22.223283    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:22.487914    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:22.641468    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:22.641610    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:22.722681    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:22.990394    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:23.140500    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:23.140655    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:23.222424    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:23.487342    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:23.641034    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:23.641059    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:23.722827    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:23.988678    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:24.139780    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:24.141918    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:24.223095    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:24.489405    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:24.642133    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:24.642504    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:24.722425    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:24.987972    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:25.142254    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:25.142426    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:25.223275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:25.488023    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:25.641188    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:25.641494    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:25.722486    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:25.988361    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:26.140631    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:26.140784    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:26.222972    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:26.488547    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:26.640926    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:26.641461    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:26.722528    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:26.988053    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:27.140714    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:27.141495    5489 kapi.go:107] duration metric: took 1m6.504930755s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 18:51:27.222119    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:27.488202    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:27.640592    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:27.722334    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:27.987885    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:28.141492    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:28.223388    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:28.487842    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:28.641530    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:28.722658    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:28.988677    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:29.140867    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:29.222582    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:29.487787    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:29.641150    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:29.722935    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:29.988215    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:30.140724    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:30.223117    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:30.489275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:30.641437    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:30.723898    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:30.988318    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:31.140601    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:31.223108    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:31.488170    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:31.640546    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:31.722787    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:31.988600    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:32.140840    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:32.222661    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:32.488450    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:32.643857    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:32.723224    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:32.989511    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:33.141026    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:33.223455    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:33.487656    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:33.641083    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:33.723186    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:33.989238    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:34.141737    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:34.242753    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:34.490083    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:34.641497    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:34.722347    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:34.987936    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:35.140867    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:35.223203    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:35.495123    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:35.641642    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:35.722502    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:35.988101    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:36.141606    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:36.222966    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:36.488414    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:36.640843    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:36.723164    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:36.989296    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:37.140275    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:37.223451    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:37.488436    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:37.640654    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:37.723000    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:37.989110    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:38.144785    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:38.222869    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:38.488566    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:38.641041    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:38.740905    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:38.989088    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:39.142077    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:39.223320    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:39.487849    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:39.640966    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:39.726219    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:39.994244    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:40.142247    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:40.223916    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:40.488727    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:40.641506    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:40.723554    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:40.990745    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:41.140952    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:41.223376    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:41.489750    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:41.640925    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:41.722637    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:41.988038    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:42.144983    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:42.224280    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:42.488876    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:42.641008    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:42.723420    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:42.988842    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:43.141251    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:43.223202    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:43.488573    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:43.641037    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:43.723134    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:43.988817    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:44.140892    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:44.223101    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:44.488906    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:44.641303    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:44.723928    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:44.989811    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:45.142353    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:45.228800    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:45.488264    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:45.642105    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:45.724163    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:45.988136    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:46.142608    5489 kapi.go:107] duration metric: took 1m25.505262125s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 18:51:46.222583    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:46.487957    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:46.722802    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:46.988731    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:47.223171    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:47.489434    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:47.722582    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:47.988436    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:48.224081    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:48.488881    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:48.723653    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:48.992852    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:49.223077    5489 kapi.go:107] duration metric: took 1m25.003485392s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 18:51:49.226325    5489 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-391119 cluster.
	I1202 18:51:49.229067    5489 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 18:51:49.231892    5489 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
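A minimal sketch of opting a single pod out of the credential mount described in the gcp-auth messages above, assuming kubectl already points at the addons-391119 cluster. Only the label key gcp-auth-skip-secret comes from the log; the pod name, image, and label value "true" are placeholders for illustration:

	# hypothetical pod that the gcp-auth webhook should skip (label present at creation time)
	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-demo
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: busybox:1.36
	    command: ["sleep", "3600"]
	EOF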
	I1202 18:51:49.488596    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:49.989229    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:50.487388    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:50.987773    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:51.489671    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:51.990150    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:52.489011    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:52.988275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:53.488822    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:53.990165    5489 kapi.go:107] duration metric: took 1m33.005714318s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 18:51:53.993313    5489 out.go:179] * Enabled addons: nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, storage-provisioner, registry-creds, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1202 18:51:53.996264    5489 addons.go:530] duration metric: took 1m39.511594272s for enable addons: enabled=[nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin storage-provisioner registry-creds cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1202 18:51:53.996319    5489 start.go:247] waiting for cluster config update ...
	I1202 18:51:53.996345    5489 start.go:256] writing updated cluster config ...
	I1202 18:51:53.996623    5489 ssh_runner.go:195] Run: rm -f paused
	I1202 18:51:54.001197    5489 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 18:51:54.004648    5489 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khwqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.013478    5489 pod_ready.go:94] pod "coredns-66bc5c9577-khwqf" is "Ready"
	I1202 18:51:54.013520    5489 pod_ready.go:86] duration metric: took 8.843388ms for pod "coredns-66bc5c9577-khwqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.035334    5489 pod_ready.go:83] waiting for pod "etcd-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.043889    5489 pod_ready.go:94] pod "etcd-addons-391119" is "Ready"
	I1202 18:51:54.043929    5489 pod_ready.go:86] duration metric: took 8.565675ms for pod "etcd-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.047583    5489 pod_ready.go:83] waiting for pod "kube-apiserver-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.061131    5489 pod_ready.go:94] pod "kube-apiserver-addons-391119" is "Ready"
	I1202 18:51:54.061193    5489 pod_ready.go:86] duration metric: took 13.582442ms for pod "kube-apiserver-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.089065    5489 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.404757    5489 pod_ready.go:94] pod "kube-controller-manager-addons-391119" is "Ready"
	I1202 18:51:54.404834    5489 pod_ready.go:86] duration metric: took 315.742493ms for pod "kube-controller-manager-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.605966    5489 pod_ready.go:83] waiting for pod "kube-proxy-z4z6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:55.004956    5489 pod_ready.go:94] pod "kube-proxy-z4z6m" is "Ready"
	I1202 18:51:55.005029    5489 pod_ready.go:86] duration metric: took 399.030605ms for pod "kube-proxy-z4z6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:55.205452    5489 pod_ready.go:83] waiting for pod "kube-scheduler-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:55.605702    5489 pod_ready.go:94] pod "kube-scheduler-addons-391119" is "Ready"
	I1202 18:51:55.605728    5489 pod_ready.go:86] duration metric: took 400.248478ms for pod "kube-scheduler-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:55.605741    5489 pod_ready.go:40] duration metric: took 1.604512353s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 18:51:55.994032    5489 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 18:51:55.999587    5489 out.go:179] * Done! kubectl is now configured to use "addons-391119" cluster and "default" namespace by default
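The pod_ready loop above polls kube-system pods by the label selectors it lists. A quick manual spot-check of the same thing, assuming kubectl is on the PATH and still pointing at the addons-391119 context as the final line reports:

	# confirm the active context written by minikube
	kubectl config current-context
	# the same label selectors the readiness check waits on
	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system get pods -l component=kube-apiserver
	kubectl -n kube-system get pods -l component=etcd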
	
	
	==> CRI-O <==
	Dec 02 18:54:09 addons-391119 crio[831]: time="2025-12-02T18:54:09.430552889Z" level=info msg="Removed pod sandbox: 1d2b5a7cc1496caf48375ee676b8caab5c7f6c677d855240170eff7b192f89d7" id=3d19c103-7dd6-4b0c-8bf9-6c74df3cb547 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.927245302Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-689cp/POD" id=7b3bdbb0-ff0b-4eca-9301-bd42d77aadb4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.927311318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.943796086Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-689cp Namespace:default ID:4c4fb8d81e14fe698823c3813a86dbb13506ea970e4d807e7b0a36942f52cbba UID:56199c3f-7300-4dd1-9f35-c38c9c1fa3ff NetNS:/var/run/netns/8d8cb345-36af-46ba-9ffd-379067920ccd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002cb4688}] Aliases:map[]}"
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.943970775Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-689cp to CNI network \"kindnet\" (type=ptp)"
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.958683435Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-689cp Namespace:default ID:4c4fb8d81e14fe698823c3813a86dbb13506ea970e4d807e7b0a36942f52cbba UID:56199c3f-7300-4dd1-9f35-c38c9c1fa3ff NetNS:/var/run/netns/8d8cb345-36af-46ba-9ffd-379067920ccd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002cb4688}] Aliases:map[]}"
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.958987351Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-689cp for CNI network kindnet (type=ptp)"
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.967259596Z" level=info msg="Ran pod sandbox 4c4fb8d81e14fe698823c3813a86dbb13506ea970e4d807e7b0a36942f52cbba with infra container: default/hello-world-app-5d498dc89-689cp/POD" id=7b3bdbb0-ff0b-4eca-9301-bd42d77aadb4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.96881261Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=514e5841-eb47-4706-8a29-9e12274baeb7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.96906587Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=514e5841-eb47-4706-8a29-9e12274baeb7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.969170736Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=514e5841-eb47-4706-8a29-9e12274baeb7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.97032817Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=381b9c5a-3624-4a92-be1f-1a77dec99610 name=/runtime.v1.ImageService/PullImage
	Dec 02 18:54:55 addons-391119 crio[831]: time="2025-12-02T18:54:55.973419536Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.604254628Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=381b9c5a-3624-4a92-be1f-1a77dec99610 name=/runtime.v1.ImageService/PullImage
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.60482425Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=88269e4a-cc38-4a8f-9de7-62fb2bb678f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.60925555Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=280bf597-65bd-4b40-9ca8-0586a3653905 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.61848106Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-689cp/hello-world-app" id=a047b713-c2af-4e1c-9264-ac6d36b8a125 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.618806251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.631198899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.631402388Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/39a00302856da945b5f3af441b4a3c3f8f3784ff4ef131be0d3a22080c37456d/merged/etc/passwd: no such file or directory"
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.631425567Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/39a00302856da945b5f3af441b4a3c3f8f3784ff4ef131be0d3a22080c37456d/merged/etc/group: no such file or directory"
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.631661743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.652037465Z" level=info msg="Created container 17a393e7545d00d291e198d79f8d2770dccdd0c428be1e99163420ed37f9a2a2: default/hello-world-app-5d498dc89-689cp/hello-world-app" id=a047b713-c2af-4e1c-9264-ac6d36b8a125 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.65498505Z" level=info msg="Starting container: 17a393e7545d00d291e198d79f8d2770dccdd0c428be1e99163420ed37f9a2a2" id=45da0732-08ef-4b0f-afbe-b97215cbaf38 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 18:54:56 addons-391119 crio[831]: time="2025-12-02T18:54:56.659821629Z" level=info msg="Started container" PID=6983 containerID=17a393e7545d00d291e198d79f8d2770dccdd0c428be1e99163420ed37f9a2a2 description=default/hello-world-app-5d498dc89-689cp/hello-world-app id=45da0732-08ef-4b0f-afbe-b97215cbaf38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c4fb8d81e14fe698823c3813a86dbb13506ea970e4d807e7b0a36942f52cbba
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	17a393e7545d0       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   4c4fb8d81e14f       hello-world-app-5d498dc89-689cp            default
	ed150bb2c5f7d       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   33d2dfe09efe8       nginx                                      default
	da3aa953a1e44       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   8231900e582f7       busybox                                    default
	08bf95d396b25       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	e9ec8143d3c3b       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	4936dfcaa8f43       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	667b638fd852e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	7bf7410b1e128       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   09fca3aeab6fe       gcp-auth-78565c9fb4-846nn                  gcp-auth
	16415d39f3c1d       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   cabac100f1519       ingress-nginx-controller-6c8bf45fb-wxm2s   ingress-nginx
	ece6af85dd624       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   84ca873e22231       gadget-htrz9                               gadget
	99c70d815e876       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	e58c1dd3e1586       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   321e6a28021f4       kube-ingress-dns-minikube                  kube-system
	4a404d74a7a80       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   334c1255b47d2       registry-proxy-8cmtn                       kube-system
	68e3c4d497333       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              patch                                    0                   71a0f8a89c4f6       ingress-nginx-admission-patch-fd76k        ingress-nginx
	ede9106262c1d       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   469ec2121a19a       csi-hostpath-resizer-0                     kube-system
	86afc0e10ae10       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   370e8a64064c8       nvidia-device-plugin-daemonset-jhzdp       kube-system
	56226ccdee89d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   0ffcdd2163447       ingress-nginx-admission-create-hbhz6       ingress-nginx
	eebd27b0fd019       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	efde85fa7a639       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   5b4e9f5d11f3a       snapshot-controller-7d9fbc56b8-8x8cp       kube-system
	e46cf04322b8a       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   5e86465690254       csi-hostpath-attacher-0                    kube-system
	19488e1fccb37       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   587c230687612       local-path-provisioner-648f6765c9-dqshj    local-path-storage
	f9dabed489849       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   44b6efeca2b22       yakd-dashboard-5ff678cb9-9rt6b             yakd-dashboard
	c996f159fc220       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   498e2ed1789e6       metrics-server-85b7d694d7-8qm5c            kube-system
	ec8ebe2000d71       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   e298b111743b1       registry-6b586f9694-sb27k                  kube-system
	3d93f44570f5d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   360ea997f74a2       snapshot-controller-7d9fbc56b8-gmcbm       kube-system
	ded683f7ffbb6       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   1576a4d6b3588       cloud-spanner-emulator-5bdddb765-xl6n8     default
	a49e8bf8b4a18       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   bf26e6271b773       storage-provisioner                        kube-system
	1c15eef657852       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   306193e1cd27c       coredns-66bc5c9577-khwqf                   kube-system
	560101125bfd0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   5aaec64c2a130       kindnet-zszgk                              kube-system
	35dda71f1d492       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   11499283350da       kube-proxy-z4z6m                           kube-system
	8e8e87c9645a2       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             4 minutes ago            Running             kube-apiserver                           0                   8b237cd443d89       kube-apiserver-addons-391119               kube-system
	7c076f35e9904       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             4 minutes ago            Running             kube-scheduler                           0                   2d297d0d7bbf8       kube-scheduler-addons-391119               kube-system
	0ebf58658f3b8       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   41af2a338e6d1       etcd-addons-391119                         kube-system
	c2d0298aacf21       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             4 minutes ago            Running             kube-controller-manager                  0                   272e4608fdeb5       kube-controller-manager-addons-391119      kube-system
	
	
	==> coredns [1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa] <==
	[INFO] 10.244.0.18:50764 - 39341 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002093508s
	[INFO] 10.244.0.18:50764 - 7680 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000150562s
	[INFO] 10.244.0.18:50764 - 35129 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000088047s
	[INFO] 10.244.0.18:40704 - 2959 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172042s
	[INFO] 10.244.0.18:40704 - 4214 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000221904s
	[INFO] 10.244.0.18:52872 - 55205 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000153178s
	[INFO] 10.244.0.18:52872 - 55394 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000280307s
	[INFO] 10.244.0.18:42262 - 42970 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095555s
	[INFO] 10.244.0.18:42262 - 42517 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111391s
	[INFO] 10.244.0.18:43682 - 10228 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001343065s
	[INFO] 10.244.0.18:43682 - 10416 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001756095s
	[INFO] 10.244.0.18:56430 - 47188 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108732s
	[INFO] 10.244.0.18:56430 - 47583 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000262296s
	[INFO] 10.244.0.21:48666 - 54747 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000278034s
	[INFO] 10.244.0.21:34649 - 14954 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000224226s
	[INFO] 10.244.0.21:46258 - 64870 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00023554s
	[INFO] 10.244.0.21:60819 - 921 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00036184s
	[INFO] 10.244.0.21:60574 - 13583 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013562s
	[INFO] 10.244.0.21:59120 - 20787 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090435s
	[INFO] 10.244.0.21:50478 - 58800 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003074979s
	[INFO] 10.244.0.21:59277 - 64765 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003331376s
	[INFO] 10.244.0.21:33542 - 23755 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000608432s
	[INFO] 10.244.0.21:49064 - 64383 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0046589s
	[INFO] 10.244.0.23:40952 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000172588s
	[INFO] 10.244.0.23:59064 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156203s
	
	
	==> describe nodes <==
	Name:               addons-391119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-391119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=addons-391119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T18_50_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-391119
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-391119"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 18:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-391119
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 18:54:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 18:54:24 +0000   Tue, 02 Dec 2025 18:50:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 18:54:24 +0000   Tue, 02 Dec 2025 18:50:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 18:54:24 +0000   Tue, 02 Dec 2025 18:50:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 18:54:24 +0000   Tue, 02 Dec 2025 18:50:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-391119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                f89a01d6-7158-41c3-94b9-c90bb28284d1
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-5bdddb765-xl6n8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  default                     hello-world-app-5d498dc89-689cp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-htrz9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  gcp-auth                    gcp-auth-78565c9fb4-846nn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-wxm2s    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m37s
	  kube-system                 coredns-66bc5c9577-khwqf                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m43s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-gdz4d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-addons-391119                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m48s
	  kube-system                 kindnet-zszgk                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m43s
	  kube-system                 kube-apiserver-addons-391119                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-391119       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-z4z6m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-addons-391119                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 metrics-server-85b7d694d7-8qm5c             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m39s
	  kube-system                 nvidia-device-plugin-daemonset-jhzdp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-6b586f9694-sb27k                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 registry-creds-764b6fb674-nvw8r             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 registry-proxy-8cmtn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-8x8cp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-gmcbm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  local-path-storage          local-path-provisioner-648f6765c9-dqshj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9rt6b              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m41s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node addons-391119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node addons-391119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m55s (x8 over 4m55s)  kubelet          Node addons-391119 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m48s                  kubelet          Node addons-391119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m48s                  kubelet          Node addons-391119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m48s                  kubelet          Node addons-391119 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     4m44s                  cidrAllocator    Node addons-391119 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           4m44s                  node-controller  Node addons-391119 event: Registered Node addons-391119 in Controller
	  Normal   NodeReady                4m2s                   kubelet          Node addons-391119 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c] <==
	{"level":"warn","ts":"2025-12-02T18:50:04.594444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.601739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.650721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.721009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.754509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.786604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.822251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.862181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.888045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.916527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.949310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.992315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.025865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.068088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.133749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.194299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.231304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.253845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.334227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:21.253398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:21.269587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:43.293029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:43.307847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:43.344292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:43.360028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54092","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [7bf7410b1e12852eba266f9783b75c5daa87d1cb4461c783399415a158482592] <==
	2025/12/02 18:51:48 GCP Auth Webhook started!
	2025/12/02 18:51:56 Ready to marshal response ...
	2025/12/02 18:51:56 Ready to write response ...
	2025/12/02 18:51:57 Ready to marshal response ...
	2025/12/02 18:51:57 Ready to write response ...
	2025/12/02 18:51:57 Ready to marshal response ...
	2025/12/02 18:51:57 Ready to write response ...
	2025/12/02 18:52:19 Ready to marshal response ...
	2025/12/02 18:52:19 Ready to write response ...
	2025/12/02 18:52:22 Ready to marshal response ...
	2025/12/02 18:52:22 Ready to write response ...
	2025/12/02 18:52:22 Ready to marshal response ...
	2025/12/02 18:52:22 Ready to write response ...
	2025/12/02 18:52:30 Ready to marshal response ...
	2025/12/02 18:52:30 Ready to write response ...
	2025/12/02 18:52:34 Ready to marshal response ...
	2025/12/02 18:52:34 Ready to write response ...
	2025/12/02 18:52:35 Ready to marshal response ...
	2025/12/02 18:52:35 Ready to write response ...
	2025/12/02 18:53:01 Ready to marshal response ...
	2025/12/02 18:53:01 Ready to write response ...
	2025/12/02 18:54:55 Ready to marshal response ...
	2025/12/02 18:54:55 Ready to write response ...
	
	
	==> kernel <==
	 18:54:57 up 37 min,  0 user,  load average: 0.26, 0.78, 0.43
	Linux addons-391119 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026] <==
	I1202 18:52:55.244729       1 main.go:301] handling current node
	I1202 18:53:05.242426       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:53:05.242525       1 main.go:301] handling current node
	I1202 18:53:15.243635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:53:15.243664       1 main.go:301] handling current node
	I1202 18:53:25.249627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:53:25.249693       1 main.go:301] handling current node
	I1202 18:53:35.249726       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:53:35.249759       1 main.go:301] handling current node
	I1202 18:53:45.242447       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:53:45.242537       1 main.go:301] handling current node
	I1202 18:53:55.246827       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:53:55.246958       1 main.go:301] handling current node
	I1202 18:54:05.250109       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:54:05.250139       1 main.go:301] handling current node
	I1202 18:54:15.243040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:54:15.243074       1 main.go:301] handling current node
	I1202 18:54:25.241769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:54:25.241866       1 main.go:301] handling current node
	I1202 18:54:35.242396       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:54:35.242510       1 main.go:301] handling current node
	I1202 18:54:45.243664       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:54:45.243725       1 main.go:301] handling current node
	I1202 18:54:55.241741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:54:55.241771       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b] <==
	E1202 18:51:18.705238       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.73.76:443: connect: connection refused" logger="UnhandledError"
	E1202 18:51:18.710593       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.73.76:443: connect: connection refused" logger="UnhandledError"
	W1202 18:51:19.699674       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 18:51:19.699730       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 18:51:19.699743       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 18:51:19.699841       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 18:51:19.699915       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 18:51:19.700990       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 18:51:23.745303       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 18:51:23.745395       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1202 18:51:23.745543       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 18:51:23.801297       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 18:52:06.525983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54906: use of closed network connection
	E1202 18:52:06.752384       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54938: use of closed network connection
	I1202 18:52:34.688080       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 18:52:35.116927       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.56.175"}
	I1202 18:52:48.254903       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1202 18:52:50.259917       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1202 18:54:55.805136       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.212.13"}
	
	
	==> kube-controller-manager [c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc] <==
	I1202 18:50:13.323540       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-391119"
	I1202 18:50:13.323603       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 18:50:13.323369       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 18:50:13.324090       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 18:50:13.325407       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 18:50:13.325480       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 18:50:13.325757       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 18:50:13.325852       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 18:50:13.326169       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 18:50:13.326696       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 18:50:13.326862       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 18:50:13.328283       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 18:50:13.329498       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 18:50:13.330484       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	E1202 18:50:18.904759       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1202 18:50:43.286143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 18:50:43.286288       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1202 18:50:43.286338       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 18:50:43.327591       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1202 18:50:43.333515       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 18:50:43.387303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 18:50:43.434705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 18:50:58.333807       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1202 18:51:13.392282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 18:51:13.442812       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072] <==
	I1202 18:50:15.336168       1 server_linux.go:53] "Using iptables proxy"
	I1202 18:50:15.476483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 18:50:15.576607       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 18:50:15.576689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 18:50:15.576776       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 18:50:15.642400       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 18:50:15.642452       1 server_linux.go:132] "Using iptables Proxier"
	I1202 18:50:15.649225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 18:50:15.659590       1 server.go:527] "Version info" version="v1.34.2"
	I1202 18:50:15.659614       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 18:50:15.661987       1 config.go:200] "Starting service config controller"
	I1202 18:50:15.662000       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 18:50:15.662022       1 config.go:106] "Starting endpoint slice config controller"
	I1202 18:50:15.662026       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 18:50:15.662069       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 18:50:15.662075       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 18:50:15.666845       1 config.go:309] "Starting node config controller"
	I1202 18:50:15.666865       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 18:50:15.666873       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 18:50:15.762102       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 18:50:15.762160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 18:50:15.762174       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753] <==
	I1202 18:50:07.522625       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 18:50:07.524933       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 18:50:07.525094       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:50:07.525115       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:50:07.525133       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 18:50:07.531617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 18:50:07.532396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 18:50:07.532542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 18:50:07.532644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 18:50:07.532689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 18:50:07.532932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1202 18:50:07.533005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 18:50:07.533074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 18:50:07.533122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 18:50:07.534612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 18:50:07.536591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 18:50:07.536689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 18:50:07.536747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 18:50:07.536787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 18:50:07.537610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 18:50:07.542342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 18:50:07.542580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 18:50:07.543680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 18:50:07.543694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1202 18:50:08.925739       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 18:53:09 addons-391119 kubelet[1273]: E1202 18:53:09.422460    1273 manager.go:1116] Failed to create existing container: /docker/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/crio-5ae9c6046b0ca2b2cb25ff1d83a426e7324b4f30443ec09a1f72a4c8b8a900e9: Error finding container 5ae9c6046b0ca2b2cb25ff1d83a426e7324b4f30443ec09a1f72a4c8b8a900e9: Status 404 returned error can't find the container with id 5ae9c6046b0ca2b2cb25ff1d83a426e7324b4f30443ec09a1f72a4c8b8a900e9
	Dec 02 18:53:09 addons-391119 kubelet[1273]: I1202 18:53:09.985420    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=8.637496113 podStartE2EDuration="8.98540314s" podCreationTimestamp="2025-12-02 18:53:01 +0000 UTC" firstStartedPulling="2025-12-02 18:53:02.257943408 +0000 UTC m=+173.178194423" lastFinishedPulling="2025-12-02 18:53:02.605850435 +0000 UTC m=+173.526101450" observedRunningTime="2025-12-02 18:53:03.316483971 +0000 UTC m=+174.236734985" watchObservedRunningTime="2025-12-02 18:53:09.98540314 +0000 UTC m=+180.905654163"
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.240689    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhhf\" (UniqueName: \"kubernetes.io/projected/828748fa-d7ab-4e04-8da4-80386bdddab3-kube-api-access-9bhhf\") pod \"828748fa-d7ab-4e04-8da4-80386bdddab3\" (UID: \"828748fa-d7ab-4e04-8da4-80386bdddab3\") "
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.240806    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/828748fa-d7ab-4e04-8da4-80386bdddab3-gcp-creds\") pod \"828748fa-d7ab-4e04-8da4-80386bdddab3\" (UID: \"828748fa-d7ab-4e04-8da4-80386bdddab3\") "
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.240921    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^205c68c7-cfb0-11f0-9c98-e6f9d476d345\") pod \"828748fa-d7ab-4e04-8da4-80386bdddab3\" (UID: \"828748fa-d7ab-4e04-8da4-80386bdddab3\") "
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.241525    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/828748fa-d7ab-4e04-8da4-80386bdddab3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "828748fa-d7ab-4e04-8da4-80386bdddab3" (UID: "828748fa-d7ab-4e04-8da4-80386bdddab3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.243388    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828748fa-d7ab-4e04-8da4-80386bdddab3-kube-api-access-9bhhf" (OuterVolumeSpecName: "kube-api-access-9bhhf") pod "828748fa-d7ab-4e04-8da4-80386bdddab3" (UID: "828748fa-d7ab-4e04-8da4-80386bdddab3"). InnerVolumeSpecName "kube-api-access-9bhhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.247563    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^205c68c7-cfb0-11f0-9c98-e6f9d476d345" (OuterVolumeSpecName: "task-pv-storage") pod "828748fa-d7ab-4e04-8da4-80386bdddab3" (UID: "828748fa-d7ab-4e04-8da4-80386bdddab3"). InnerVolumeSpecName "pvc-b3a79612-6118-4d12-8467-fe9300aeec6c". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.341943    1273 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/828748fa-d7ab-4e04-8da4-80386bdddab3-gcp-creds\") on node \"addons-391119\" DevicePath \"\""
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.342006    1273 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-b3a79612-6118-4d12-8467-fe9300aeec6c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^205c68c7-cfb0-11f0-9c98-e6f9d476d345\") on node \"addons-391119\" "
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.342022    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9bhhf\" (UniqueName: \"kubernetes.io/projected/828748fa-d7ab-4e04-8da4-80386bdddab3-kube-api-access-9bhhf\") on node \"addons-391119\" DevicePath \"\""
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.346787    1273 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-b3a79612-6118-4d12-8467-fe9300aeec6c" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^205c68c7-cfb0-11f0-9c98-e6f9d476d345") on node "addons-391119"
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.366637    1273 scope.go:117] "RemoveContainer" containerID="f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e"
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.380975    1273 scope.go:117] "RemoveContainer" containerID="f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e"
	Dec 02 18:53:10 addons-391119 kubelet[1273]: E1202 18:53:10.381392    1273 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e\": container with ID starting with f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e not found: ID does not exist" containerID="f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e"
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.381431    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e"} err="failed to get container status \"f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e\": rpc error: code = NotFound desc = could not find container \"f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e\": container with ID starting with f3f4d755c36c8ce16a5024b048e92fcf2611bd15ae3b81e1a21f680c66071c5e not found: ID does not exist"
	Dec 02 18:53:10 addons-391119 kubelet[1273]: I1202 18:53:10.443358    1273 reconciler_common.go:299] "Volume detached for volume \"pvc-b3a79612-6118-4d12-8467-fe9300aeec6c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^205c68c7-cfb0-11f0-9c98-e6f9d476d345\") on node \"addons-391119\" DevicePath \"\""
	Dec 02 18:53:11 addons-391119 kubelet[1273]: I1202 18:53:11.243865    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828748fa-d7ab-4e04-8da4-80386bdddab3" path="/var/lib/kubelet/pods/828748fa-d7ab-4e04-8da4-80386bdddab3/volumes"
	Dec 02 18:53:50 addons-391119 kubelet[1273]: I1202 18:53:50.240888    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jhzdp" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 18:53:50 addons-391119 kubelet[1273]: I1202 18:53:50.240983    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-sb27k" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 18:53:59 addons-391119 kubelet[1273]: I1202 18:53:59.242241    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8cmtn" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 18:54:55 addons-391119 kubelet[1273]: I1202 18:54:55.240630    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-sb27k" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 18:54:55 addons-391119 kubelet[1273]: I1202 18:54:55.734139    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/56199c3f-7300-4dd1-9f35-c38c9c1fa3ff-gcp-creds\") pod \"hello-world-app-5d498dc89-689cp\" (UID: \"56199c3f-7300-4dd1-9f35-c38c9c1fa3ff\") " pod="default/hello-world-app-5d498dc89-689cp"
	Dec 02 18:54:55 addons-391119 kubelet[1273]: I1202 18:54:55.734215    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfbj\" (UniqueName: \"kubernetes.io/projected/56199c3f-7300-4dd1-9f35-c38c9c1fa3ff-kube-api-access-fnfbj\") pod \"hello-world-app-5d498dc89-689cp\" (UID: \"56199c3f-7300-4dd1-9f35-c38c9c1fa3ff\") " pod="default/hello-world-app-5d498dc89-689cp"
	Dec 02 18:54:55 addons-391119 kubelet[1273]: W1202 18:54:55.965575    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/crio-4c4fb8d81e14fe698823c3813a86dbb13506ea970e4d807e7b0a36942f52cbba WatchSource:0}: Error finding container 4c4fb8d81e14fe698823c3813a86dbb13506ea970e4d807e7b0a36942f52cbba: Status 404 returned error can't find the container with id 4c4fb8d81e14fe698823c3813a86dbb13506ea970e4d807e7b0a36942f52cbba
	
	
	==> storage-provisioner [a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17] <==
	W1202 18:54:33.823902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:35.826707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:35.830910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:37.834011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:37.838594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:39.841409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:39.848035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:41.850566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:41.855740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:43.858629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:43.862720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:45.865422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:45.869712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:47.872756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:47.876838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:49.879841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:49.887427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:51.890721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:51.895381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:53.898384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:53.903824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:55.906580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:55.911637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:57.915605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:54:57.926826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-391119 -n addons-391119
helpers_test.go:269: (dbg) Run:  kubectl --context addons-391119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-hbhz6 ingress-nginx-admission-patch-fd76k registry-creds-764b6fb674-nvw8r
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-391119 describe pod ingress-nginx-admission-create-hbhz6 ingress-nginx-admission-patch-fd76k registry-creds-764b6fb674-nvw8r
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-391119 describe pod ingress-nginx-admission-create-hbhz6 ingress-nginx-admission-patch-fd76k registry-creds-764b6fb674-nvw8r: exit status 1 (83.979863ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hbhz6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fd76k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-nvw8r" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-391119 describe pod ingress-nginx-admission-create-hbhz6 ingress-nginx-admission-patch-fd76k registry-creds-764b6fb674-nvw8r: exit status 1
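
Note: the describe step exits 1 here only because the two admission job pods and the registry-creds pod had already been deleted by the time the post-mortem ran. A hedged sketch of an equivalent manual post-mortem that skips pods which no longer exist (illustrative only, not part of helpers_test.go):

	kubectl --context addons-391119 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  kubectl --context addons-391119 -n "$ns" describe pod "$name" || true
	done
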
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (272.058988ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:54:58.886904   14868 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:54:58.887128   14868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:54:58.887157   14868 out.go:374] Setting ErrFile to fd 2...
	I1202 18:54:58.887177   14868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:54:58.887462   14868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:54:58.887790   14868 mustload.go:66] Loading cluster: addons-391119
	I1202 18:54:58.888219   14868 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:54:58.888255   14868 addons.go:622] checking whether the cluster is paused
	I1202 18:54:58.888398   14868 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:54:58.888427   14868 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:54:58.888966   14868 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:54:58.906187   14868 ssh_runner.go:195] Run: systemctl --version
	I1202 18:54:58.906241   14868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:54:58.922710   14868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:54:59.032956   14868 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:54:59.033050   14868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:54:59.077020   14868 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:54:59.077046   14868 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:54:59.077051   14868 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:54:59.077055   14868 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:54:59.077059   14868 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:54:59.077063   14868 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:54:59.077067   14868 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:54:59.077070   14868 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:54:59.077072   14868 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:54:59.077080   14868 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:54:59.077083   14868 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:54:59.077086   14868 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:54:59.077089   14868 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:54:59.077092   14868 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:54:59.077095   14868 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:54:59.077100   14868 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:54:59.077104   14868 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:54:59.077107   14868 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:54:59.077110   14868 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:54:59.077113   14868 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:54:59.077118   14868 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:54:59.077126   14868 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:54:59.077129   14868 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:54:59.077132   14868 cri.go:89] found id: ""
	I1202 18:54:59.077184   14868 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:54:59.092743   14868 out.go:203] 
	W1202 18:54:59.095751   14868 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:54:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:54:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:54:59.095775   14868 out.go:285] * 
	* 
	W1202 18:54:59.100545   14868 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:54:59.103458   14868 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable ingress --alsologtostderr -v=1: exit status 11 (282.284625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:54:59.168903   14981 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:54:59.169924   14981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:54:59.169940   14981 out.go:374] Setting ErrFile to fd 2...
	I1202 18:54:59.169947   14981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:54:59.170250   14981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:54:59.170556   14981 mustload.go:66] Loading cluster: addons-391119
	I1202 18:54:59.170941   14981 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:54:59.170959   14981 addons.go:622] checking whether the cluster is paused
	I1202 18:54:59.171061   14981 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:54:59.171074   14981 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:54:59.171586   14981 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:54:59.190082   14981 ssh_runner.go:195] Run: systemctl --version
	I1202 18:54:59.190137   14981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:54:59.209049   14981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:54:59.320733   14981 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:54:59.320835   14981 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:54:59.357023   14981 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:54:59.357046   14981 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:54:59.357052   14981 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:54:59.357056   14981 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:54:59.357059   14981 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:54:59.357063   14981 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:54:59.357066   14981 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:54:59.357070   14981 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:54:59.357074   14981 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:54:59.357080   14981 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:54:59.357083   14981 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:54:59.357087   14981 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:54:59.357091   14981 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:54:59.357094   14981 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:54:59.357097   14981 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:54:59.357106   14981 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:54:59.357114   14981 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:54:59.357119   14981 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:54:59.357122   14981 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:54:59.357125   14981 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:54:59.357130   14981 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:54:59.357133   14981 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:54:59.357136   14981 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:54:59.357139   14981 cri.go:89] found id: ""
	I1202 18:54:59.357198   14981 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:54:59.374028   14981 out.go:203] 
	W1202 18:54:59.377095   14981 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:54:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:54:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:54:59.377180   14981 out.go:285] * 
	* 
	W1202 18:54:59.382039   14981 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:54:59.385085   14981 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.03s)
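
Note: every `addons disable` in this run fails the same way: minikube's pre-flight "is the cluster paused" check lists kube-system containers with crictl and then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on the node. A hedged sketch for reproducing that check by hand (profile name taken from the log; the guess that cri-o is backed by a runtime other than runc, e.g. crun, is an assumption the log does not confirm):

	minikube -p addons-391119 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-391119 ssh -- sudo runc list -f json     # reproduces: open /run/runc: no such file or directory
	minikube -p addons-391119 ssh -- ls -d /run/runc /run/crun  # see which state directory the runtime actually created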

                                                
                                    
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-htrz9" [7034a56d-6196-4952-9082-a0be6da89004] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003234245s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (260.7566ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:53:17.408410   13851 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:53:17.408639   13851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:53:17.408667   13851 out.go:374] Setting ErrFile to fd 2...
	I1202 18:53:17.408686   13851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:53:17.408977   13851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:53:17.409286   13851 mustload.go:66] Loading cluster: addons-391119
	I1202 18:53:17.409788   13851 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:53:17.409829   13851 addons.go:622] checking whether the cluster is paused
	I1202 18:53:17.409974   13851 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:53:17.410008   13851 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:53:17.410538   13851 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:53:17.431640   13851 ssh_runner.go:195] Run: systemctl --version
	I1202 18:53:17.431697   13851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:53:17.454389   13851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:53:17.556245   13851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:53:17.556345   13851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:53:17.584996   13851 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:53:17.585018   13851 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:53:17.585023   13851 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:53:17.585026   13851 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:53:17.585029   13851 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:53:17.585033   13851 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:53:17.585036   13851 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:53:17.585039   13851 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:53:17.585042   13851 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:53:17.585047   13851 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:53:17.585051   13851 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:53:17.585054   13851 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:53:17.585057   13851 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:53:17.585060   13851 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:53:17.585066   13851 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:53:17.585072   13851 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:53:17.585078   13851 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:53:17.585084   13851 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:53:17.585087   13851 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:53:17.585090   13851 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:53:17.585101   13851 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:53:17.585107   13851 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:53:17.585110   13851 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:53:17.585113   13851 cri.go:89] found id: ""
	I1202 18:53:17.585169   13851 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:53:17.599203   13851 out.go:203] 
	W1202 18:53:17.602129   13851 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:53:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:53:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:53:17.602149   13851 out.go:285] * 
	* 
	W1202 18:53:17.607015   13851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:53:17.610089   13851 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.011178ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003733048s
addons_test.go:463: (dbg) Run:  kubectl --context addons-391119 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (283.345793ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:52:34.134293   12797 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:52:34.134542   12797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:34.134579   12797 out.go:374] Setting ErrFile to fd 2...
	I1202 18:52:34.134599   12797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:34.135028   12797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:52:34.135439   12797 mustload.go:66] Loading cluster: addons-391119
	I1202 18:52:34.139535   12797 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:34.139599   12797 addons.go:622] checking whether the cluster is paused
	I1202 18:52:34.139769   12797 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:34.139803   12797 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:52:34.140397   12797 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:52:34.156682   12797 ssh_runner.go:195] Run: systemctl --version
	I1202 18:52:34.156732   12797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:52:34.173418   12797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:52:34.280240   12797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:52:34.280332   12797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:52:34.334846   12797 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:52:34.334869   12797 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:52:34.334878   12797 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:52:34.334882   12797 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:52:34.334886   12797 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:52:34.334890   12797 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:52:34.334893   12797 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:52:34.334900   12797 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:52:34.334926   12797 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:52:34.334940   12797 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:52:34.334944   12797 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:52:34.334948   12797 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:52:34.334958   12797 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:52:34.334961   12797 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:52:34.334964   12797 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:52:34.334969   12797 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:52:34.334975   12797 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:52:34.334980   12797 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:52:34.334983   12797 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:52:34.334986   12797 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:52:34.335006   12797 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:52:34.335015   12797 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:52:34.335019   12797 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:52:34.335022   12797 cri.go:89] found id: ""
	I1202 18:52:34.335111   12797 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:52:34.349206   12797 out.go:203] 
	W1202 18:52:34.352323   12797 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:52:34.352344   12797 out.go:285] * 
	* 
	W1202 18:52:34.357143   12797 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:52:34.360235   12797 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.39s)
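
Note: the metrics pipeline itself was healthy in this run (`kubectl top pods` succeeded); only the addon disable step failed. As a hedged aside (illustrative, not part of the test), the same health check can also be made against the aggregated metrics API directly:

	kubectl --context addons-391119 get --raw /apis/metrics.k8s.io/v1beta1/nodes
	kubectl --context addons-391119 top pods -n kube-system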

                                                
                                    
TestAddons/parallel/CSI (40.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1202 18:52:30.458616    4470 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 18:52:30.463318    4470 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 18:52:30.463356    4470 kapi.go:107] duration metric: took 4.748176ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.765192ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-391119 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-391119 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c59813b4-9826-44fc-94cf-300b02278d59] Pending
helpers_test.go:352: "task-pv-pod" [c59813b4-9826-44fc-94cf-300b02278d59] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c59813b4-9826-44fc-94cf-300b02278d59] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003957027s
addons_test.go:572: (dbg) Run:  kubectl --context addons-391119 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-391119 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-391119 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-391119 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-391119 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-391119 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-391119 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [828748fa-d7ab-4e04-8da4-80386bdddab3] Pending
helpers_test.go:352: "task-pv-pod-restore" [828748fa-d7ab-4e04-8da4-80386bdddab3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [828748fa-d7ab-4e04-8da4-80386bdddab3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003595354s
addons_test.go:614: (dbg) Run:  kubectl --context addons-391119 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-391119 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-391119 delete volumesnapshot new-snapshot-demo
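
Note: the CSI exercise itself passed end to end (provision, snapshot, restore, cleanup); only the addon disable calls below failed. A hedged sketch of the same flow run by hand with the testdata manifests referenced above (the wait conditions mirror the jsonpath checks in the log and assume kubectl >= 1.23 for --for=jsonpath):

	kubectl --context addons-391119 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-391119 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
	kubectl --context addons-391119 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-391119 wait --for=condition=Ready pod/task-pv-pod --timeout=6m
	kubectl --context addons-391119 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-391119 wait --for=jsonpath='{.status.readyToUse}'=true volumesnapshot/new-snapshot-demo --timeout=6m
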
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (300.515489ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:53:10.852217   13743 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:53:10.852946   13743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:53:10.852958   13743 out.go:374] Setting ErrFile to fd 2...
	I1202 18:53:10.852964   13743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:53:10.853195   13743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:53:10.853467   13743 mustload.go:66] Loading cluster: addons-391119
	I1202 18:53:10.853874   13743 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:53:10.853891   13743 addons.go:622] checking whether the cluster is paused
	I1202 18:53:10.853997   13743 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:53:10.854007   13743 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:53:10.854496   13743 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:53:10.872377   13743 ssh_runner.go:195] Run: systemctl --version
	I1202 18:53:10.872435   13743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:53:10.890448   13743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:53:10.997031   13743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:53:10.997139   13743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:53:11.032630   13743 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:53:11.032650   13743 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:53:11.032655   13743 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:53:11.032659   13743 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:53:11.032663   13743 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:53:11.032666   13743 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:53:11.032670   13743 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:53:11.032673   13743 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:53:11.032676   13743 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:53:11.032682   13743 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:53:11.032685   13743 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:53:11.032688   13743 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:53:11.032691   13743 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:53:11.032694   13743 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:53:11.032697   13743 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:53:11.032706   13743 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:53:11.032709   13743 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:53:11.032714   13743 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:53:11.032717   13743 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:53:11.032720   13743 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:53:11.032725   13743 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:53:11.032728   13743 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:53:11.032731   13743 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:53:11.032734   13743 cri.go:89] found id: ""
	I1202 18:53:11.032784   13743 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:53:11.050471   13743 out.go:203] 
	W1202 18:53:11.053422   13743 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:53:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:53:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:53:11.053449   13743 out.go:285] * 
	* 
	W1202 18:53:11.077514   13743 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:53:11.080473   13743 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (265.687938ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:53:11.134100   13785 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:53:11.134921   13785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:53:11.134986   13785 out.go:374] Setting ErrFile to fd 2...
	I1202 18:53:11.137004   13785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:53:11.137427   13785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:53:11.137870   13785 mustload.go:66] Loading cluster: addons-391119
	I1202 18:53:11.138322   13785 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:53:11.138361   13785 addons.go:622] checking whether the cluster is paused
	I1202 18:53:11.138521   13785 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:53:11.138565   13785 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:53:11.139138   13785 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:53:11.159173   13785 ssh_runner.go:195] Run: systemctl --version
	I1202 18:53:11.159226   13785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:53:11.178033   13785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:53:11.281573   13785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:53:11.281678   13785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:53:11.319100   13785 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:53:11.319117   13785 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:53:11.319123   13785 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:53:11.319131   13785 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:53:11.319135   13785 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:53:11.319139   13785 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:53:11.319143   13785 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:53:11.319146   13785 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:53:11.319149   13785 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:53:11.319154   13785 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:53:11.319157   13785 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:53:11.319160   13785 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:53:11.319163   13785 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:53:11.319166   13785 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:53:11.319169   13785 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:53:11.319174   13785 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:53:11.319177   13785 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:53:11.319180   13785 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:53:11.319183   13785 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:53:11.319186   13785 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:53:11.319190   13785 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:53:11.319193   13785 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:53:11.319196   13785 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:53:11.319199   13785 cri.go:89] found id: ""
	I1202 18:53:11.319253   13785 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:53:11.335027   13785 out.go:203] 
	W1202 18:53:11.338033   13785 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:53:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:53:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:53:11.338056   13785 out.go:285] * 
	* 
	W1202 18:53:11.342814   13785 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:53:11.345892   13785 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.89s)
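Note on the failure above: the disable call (and the headlamp enable below) never reaches the addon itself. Before changing an addon, minikube checks whether the cluster is paused; per this log the check lists kube-system containers with crictl and then runs "sudo runc list -f json" inside the node, which exits 1 with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED and exit status 11. The snippet below is a minimal sketch of that failing step, assuming the check simply shells out to runc and treats any non-zero exit as fatal; the helper name is illustrative, not minikube's actual API.

	// paused_check_sketch.go - hypothetical reproduction of the failing step in
	// this log: "sudo runc list -f json" exits 1 when /run/runc does not exist.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRuncContainers is an illustrative helper, not a minikube function.
	func listRuncContainers() ([]byte, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w (output: %s)", err, out)
		}
		return out, nil
	}

	func main() {
		if _, err := listRuncContainers(); err != nil {
			// On this crio-based node the error is the same
			// "open /run/runc: no such file or directory" seen above.
			fmt.Println("paused check fails, addon command would exit 11:", err)
			return
		}
		fmt.Println("paused check passed")
	}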

                                                
                                    
TestAddons/parallel/Headlamp (3.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-391119 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-391119 --alsologtostderr -v=1: exit status 11 (264.73705ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:52:07.189745   11585 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:52:07.189940   11585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:07.189947   11585 out.go:374] Setting ErrFile to fd 2...
	I1202 18:52:07.189953   11585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:07.190301   11585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:52:07.190629   11585 mustload.go:66] Loading cluster: addons-391119
	I1202 18:52:07.191243   11585 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:07.191254   11585 addons.go:622] checking whether the cluster is paused
	I1202 18:52:07.191381   11585 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:07.191391   11585 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:52:07.192265   11585 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:52:07.209771   11585 ssh_runner.go:195] Run: systemctl --version
	I1202 18:52:07.209846   11585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:52:07.227998   11585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:52:07.331970   11585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:52:07.332049   11585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:52:07.363728   11585 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:52:07.363750   11585 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:52:07.363755   11585 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:52:07.363759   11585 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:52:07.363762   11585 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:52:07.363766   11585 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:52:07.363769   11585 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:52:07.363772   11585 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:52:07.363775   11585 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:52:07.363783   11585 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:52:07.363786   11585 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:52:07.363790   11585 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:52:07.363793   11585 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:52:07.363797   11585 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:52:07.363801   11585 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:52:07.363806   11585 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:52:07.363812   11585 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:52:07.363817   11585 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:52:07.363820   11585 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:52:07.363823   11585 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:52:07.363827   11585 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:52:07.363830   11585 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:52:07.363833   11585 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:52:07.363836   11585 cri.go:89] found id: ""
	I1202 18:52:07.363889   11585 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:52:07.378312   11585 out.go:203] 
	W1202 18:52:07.381178   11585 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:52:07.381199   11585 out.go:285] * 
	* 
	W1202 18:52:07.385984   11585 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:52:07.388867   11585 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-391119 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-391119
helpers_test.go:243: (dbg) docker inspect addons-391119:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41",
	        "Created": "2025-12-02T18:49:45.529726904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5891,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T18:49:45.594070391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/hostname",
	        "HostsPath": "/var/lib/docker/containers/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/hosts",
	        "LogPath": "/var/lib/docker/containers/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41/01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41-json.log",
	        "Name": "/addons-391119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-391119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-391119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "01bfa6b917fd642692c755302507767955f6d222f5244ccc5f3c92a203693c41",
	                "LowerDir": "/var/lib/docker/overlay2/3f2b18b93a421b917bfefd69533411381c3753105a6bd363c69e34f9320d11a1-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f2b18b93a421b917bfefd69533411381c3753105a6bd363c69e34f9320d11a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f2b18b93a421b917bfefd69533411381c3753105a6bd363c69e34f9320d11a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f2b18b93a421b917bfefd69533411381c3753105a6bd363c69e34f9320d11a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-391119",
	                "Source": "/var/lib/docker/volumes/addons-391119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-391119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-391119",
	                "name.minikube.sigs.k8s.io": "addons-391119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f72e2bc4f4b289a730b02700b7a47804af9018d946dcc264a7de0cc63184978",
	            "SandboxKey": "/var/run/docker/netns/9f72e2bc4f4b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-391119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:3a:e7:d1:50:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a219337ad6b1bd266ea3e5b9061fe73db277be6ee58b370bfba7d0e5972d90e1",
	                    "EndpointID": "901b192e056705a0076af774de77bd44b8dc6af1c1247a61660a747cab5eef4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-391119",
	                        "01bfa6b917fd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
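The inspect output above also shows how the earlier SSH step resolved its port: at 18:52:07 the tooling ran docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-391119, and NetworkSettings.Ports confirms 22/tcp is published on 127.0.0.1:32768. Below is a minimal sketch of that lookup, assuming only that the docker CLI is on PATH; hostPortFor is an illustrative helper, not part of minikube.

	// port_lookup_sketch.go - hypothetical helper that resolves a published host
	// port the same way the log does for 22/tcp on addons-391119.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPortFor(container, containerPort string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// For the container in this report this prints 32768.
		port, err := hostPortFor("addons-391119", "22/tcp")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("22/tcp is published on 127.0.0.1:" + port)
	}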
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-391119 -n addons-391119
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-391119 logs -n 25: (1.517540591s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-840542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-840542   │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │ 02 Dec 25 18:48 UTC │
	│ delete  │ -p download-only-840542                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-840542   │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │ 02 Dec 25 18:48 UTC │
	│ start   │ -o=json --download-only -p download-only-790899 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-790899   │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ delete  │ -p download-only-790899                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-790899   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ start   │ -o=json --download-only -p download-only-899383 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-899383   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ delete  │ -p download-only-899383                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-899383   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ delete  │ -p download-only-840542                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-840542   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ delete  │ -p download-only-790899                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-790899   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ delete  │ -p download-only-899383                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-899383   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ start   │ --download-only -p download-docker-936869 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-936869 │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	│ delete  │ -p download-docker-936869                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-936869 │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ start   │ --download-only -p binary-mirror-279600 --alsologtostderr --binary-mirror http://127.0.0.1:42717 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-279600   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	│ delete  │ -p binary-mirror-279600                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-279600   │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ addons  │ enable dashboard -p addons-391119                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	│ addons  │ disable dashboard -p addons-391119                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	│ start   │ -p addons-391119 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:51 UTC │
	│ addons  │ addons-391119 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:51 UTC │                     │
	│ addons  │ addons-391119 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	│ addons  │ enable headlamp -p addons-391119 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-391119          │ jenkins │ v1.37.0 │ 02 Dec 25 18:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 18:49:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 18:49:21.151560    5489 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:49:21.151692    5489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:49:21.151702    5489 out.go:374] Setting ErrFile to fd 2...
	I1202 18:49:21.151708    5489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:49:21.151963    5489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:49:21.152401    5489 out.go:368] Setting JSON to false
	I1202 18:49:21.153129    5489 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1900,"bootTime":1764699462,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 18:49:21.153193    5489 start.go:143] virtualization:  
	I1202 18:49:21.158367    5489 out.go:179] * [addons-391119] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 18:49:21.161371    5489 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 18:49:21.161480    5489 notify.go:221] Checking for updates...
	I1202 18:49:21.167211    5489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 18:49:21.169991    5489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:49:21.172807    5489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 18:49:21.175629    5489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 18:49:21.178527    5489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 18:49:21.181482    5489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 18:49:21.220770    5489 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 18:49:21.220887    5489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:49:21.303333    5489 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:49:21.29235335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:49:21.303439    5489 docker.go:319] overlay module found
	I1202 18:49:21.306650    5489 out.go:179] * Using the docker driver based on user configuration
	I1202 18:49:21.309431    5489 start.go:309] selected driver: docker
	I1202 18:49:21.309460    5489 start.go:927] validating driver "docker" against <nil>
	I1202 18:49:21.309482    5489 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 18:49:21.310303    5489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:49:21.404424    5489 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:49:21.395321691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:49:21.404571    5489 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 18:49:21.404777    5489 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 18:49:21.407551    5489 out.go:179] * Using Docker driver with root privileges
	I1202 18:49:21.410265    5489 cni.go:84] Creating CNI manager for ""
	I1202 18:49:21.410326    5489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:49:21.410334    5489 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 18:49:21.410407    5489 start.go:353] cluster config:
	{Name:addons-391119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1202 18:49:21.413500    5489 out.go:179] * Starting "addons-391119" primary control-plane node in "addons-391119" cluster
	I1202 18:49:21.416280    5489 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 18:49:21.419162    5489 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 18:49:21.421980    5489 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:49:21.422019    5489 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 18:49:21.422028    5489 cache.go:65] Caching tarball of preloaded images
	I1202 18:49:21.422122    5489 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 18:49:21.422134    5489 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 18:49:21.422488    5489 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/config.json ...
	I1202 18:49:21.422509    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/config.json: {Name:mk35d744d67e94b85876ec704acb2daf7dc5017b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:21.422662    5489 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 18:49:21.440793    5489 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 18:49:21.440916    5489 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 18:49:21.440934    5489 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 18:49:21.440938    5489 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 18:49:21.440945    5489 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 18:49:21.440950    5489 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1202 18:49:38.957451    5489 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1202 18:49:38.957489    5489 cache.go:243] Successfully downloaded all kic artifacts
	I1202 18:49:38.957535    5489 start.go:360] acquireMachinesLock for addons-391119: {Name:mkd9ba4106d5f0301c0e1410c2737c2451b7b344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 18:49:38.957680    5489 start.go:364] duration metric: took 120.908µs to acquireMachinesLock for "addons-391119"
	I1202 18:49:38.957770    5489 start.go:93] Provisioning new machine with config: &{Name:addons-391119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 18:49:38.957854    5489 start.go:125] createHost starting for "" (driver="docker")
	I1202 18:49:38.961425    5489 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1202 18:49:38.961685    5489 start.go:159] libmachine.API.Create for "addons-391119" (driver="docker")
	I1202 18:49:38.961723    5489 client.go:173] LocalClient.Create starting
	I1202 18:49:38.961843    5489 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem
	I1202 18:49:39.247034    5489 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem
	I1202 18:49:39.426048    5489 cli_runner.go:164] Run: docker network inspect addons-391119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 18:49:39.441560    5489 cli_runner.go:211] docker network inspect addons-391119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 18:49:39.441683    5489 network_create.go:284] running [docker network inspect addons-391119] to gather additional debugging logs...
	I1202 18:49:39.441702    5489 cli_runner.go:164] Run: docker network inspect addons-391119
	W1202 18:49:39.456818    5489 cli_runner.go:211] docker network inspect addons-391119 returned with exit code 1
	I1202 18:49:39.456854    5489 network_create.go:287] error running [docker network inspect addons-391119]: docker network inspect addons-391119: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-391119 not found
	I1202 18:49:39.456868    5489 network_create.go:289] output of [docker network inspect addons-391119]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-391119 not found
	
	** /stderr **
	I1202 18:49:39.456956    5489 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 18:49:39.474269    5489 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001af4660}
	I1202 18:49:39.474321    5489 network_create.go:124] attempt to create docker network addons-391119 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 18:49:39.474382    5489 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-391119 addons-391119
	I1202 18:49:39.537327    5489 network_create.go:108] docker network addons-391119 192.168.49.0/24 created
	I1202 18:49:39.537368    5489 kic.go:121] calculated static IP "192.168.49.2" for the "addons-391119" container
	I1202 18:49:39.537450    5489 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 18:49:39.553383    5489 cli_runner.go:164] Run: docker volume create addons-391119 --label name.minikube.sigs.k8s.io=addons-391119 --label created_by.minikube.sigs.k8s.io=true
	I1202 18:49:39.579393    5489 oci.go:103] Successfully created a docker volume addons-391119
	I1202 18:49:39.579486    5489 cli_runner.go:164] Run: docker run --rm --name addons-391119-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391119 --entrypoint /usr/bin/test -v addons-391119:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 18:49:41.343206    5489 cli_runner.go:217] Completed: docker run --rm --name addons-391119-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391119 --entrypoint /usr/bin/test -v addons-391119:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (1.763672865s)
	I1202 18:49:41.343247    5489 oci.go:107] Successfully prepared a docker volume addons-391119
	I1202 18:49:41.343295    5489 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:49:41.343304    5489 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 18:49:41.343363    5489 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-391119:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 18:49:45.458717    5489 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-391119:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.115299195s)
	I1202 18:49:45.458753    5489 kic.go:203] duration metric: took 4.11544498s to extract preloaded images to volume ...
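(The "Completed:" / "duration metric" pair above comes from timing the docker run that untars the preloaded image tarball into the addons-391119 volume. A rough sketch of that timing pattern, assuming docker is on PATH and using a placeholder tarball path, not the actual cli_runner/kic code:)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Mirror the extraction command from the log; the tarball path is a placeholder.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images-k8s.tar.lz4:/preloaded.tar:ro",
			"-v", "addons-391119:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		fmt.Printf("took %s, err=%v\n%s", time.Since(start), err, out)
	}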
	W1202 18:49:45.458891    5489 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 18:49:45.458997    5489 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 18:49:45.515084    5489 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-391119 --name addons-391119 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-391119 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-391119 --network addons-391119 --ip 192.168.49.2 --volume addons-391119:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 18:49:45.844880    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Running}}
	I1202 18:49:45.868171    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:49:45.893429    5489 cli_runner.go:164] Run: docker exec addons-391119 stat /var/lib/dpkg/alternatives/iptables
	I1202 18:49:45.944452    5489 oci.go:144] the created container "addons-391119" has a running status.
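(The container inspect calls above check the container state until Docker reports it as running. A small sketch of that polling loop, illustrative only and not minikube's oci package:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitRunning polls `docker container inspect --format {{.State.Running}}`
	// until it prints "true", as the inspect calls above do.
	func waitRunning(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("docker", "container", "inspect",
				name, "--format", "{{.State.Running}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("container %s not running after %s", name, timeout)
	}

	func main() {
		if err := waitRunning("addons-391119", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}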
	I1202 18:49:45.944477    5489 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa...
	I1202 18:49:46.428698    5489 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 18:49:46.447607    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:49:46.469184    5489 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 18:49:46.469203    5489 kic_runner.go:114] Args: [docker exec --privileged addons-391119 chown docker:docker /home/docker/.ssh/authorized_keys]
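(The kic ssh key step above writes an RSA key pair under .minikube/machines/addons-391119/ and installs the 381-byte public half as /home/docker/.ssh/authorized_keys inside the container. A self-contained sketch of generating such a pair, assuming the golang.org/x/crypto/ssh module; this is not minikube's kic_runner code:)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// PEM-encoded private key (what the id_rsa file would hold).
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		// authorized_keys line (what id_rsa.pub / authorized_keys would hold).
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d private key bytes\n%s", len(privPEM), ssh.MarshalAuthorizedKey(pub))
	}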
	I1202 18:49:46.509975    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:49:46.527350    5489 machine.go:94] provisionDockerMachine start ...
	I1202 18:49:46.527441    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:46.544504    5489 main.go:143] libmachine: Using SSH client type: native
	I1202 18:49:46.544833    5489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 18:49:46.544843    5489 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 18:49:46.545539    5489 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 18:49:49.697126    5489 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-391119
	
	I1202 18:49:49.697153    5489 ubuntu.go:182] provisioning hostname "addons-391119"
	I1202 18:49:49.697265    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:49.714460    5489 main.go:143] libmachine: Using SSH client type: native
	I1202 18:49:49.714767    5489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 18:49:49.714782    5489 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-391119 && echo "addons-391119" | sudo tee /etc/hostname
	I1202 18:49:49.870646    5489 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-391119
	
	I1202 18:49:49.870720    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:49.887924    5489 main.go:143] libmachine: Using SSH client type: native
	I1202 18:49:49.888228    5489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 18:49:49.888251    5489 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-391119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-391119/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-391119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 18:49:50.038385    5489 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 18:49:50.038412    5489 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 18:49:50.038446    5489 ubuntu.go:190] setting up certificates
	I1202 18:49:50.038460    5489 provision.go:84] configureAuth start
	I1202 18:49:50.038523    5489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391119
	I1202 18:49:50.067808    5489 provision.go:143] copyHostCerts
	I1202 18:49:50.067897    5489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 18:49:50.068033    5489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 18:49:50.068162    5489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 18:49:50.068302    5489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.addons-391119 san=[127.0.0.1 192.168.49.2 addons-391119 localhost minikube]
	I1202 18:49:50.427218    5489 provision.go:177] copyRemoteCerts
	I1202 18:49:50.427283    5489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 18:49:50.427326    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:50.446993    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:50.552940    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 18:49:50.569096    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 18:49:50.586441    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 18:49:50.602830    5489 provision.go:87] duration metric: took 564.348464ms to configureAuth
	I1202 18:49:50.602901    5489 ubuntu.go:206] setting minikube options for container-runtime
	I1202 18:49:50.603129    5489 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:49:50.603264    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:50.619804    5489 main.go:143] libmachine: Using SSH client type: native
	I1202 18:49:50.620109    5489 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 18:49:50.620121    5489 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 18:49:50.913721    5489 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 18:49:50.913743    5489 machine.go:97] duration metric: took 4.386374668s to provisionDockerMachine
	I1202 18:49:50.913755    5489 client.go:176] duration metric: took 11.952023678s to LocalClient.Create
	I1202 18:49:50.913768    5489 start.go:167] duration metric: took 11.952083918s to libmachine.API.Create "addons-391119"
	I1202 18:49:50.913775    5489 start.go:293] postStartSetup for "addons-391119" (driver="docker")
	I1202 18:49:50.913785    5489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 18:49:50.913854    5489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 18:49:50.913908    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:50.931968    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:51.034154    5489 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 18:49:51.037647    5489 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 18:49:51.037696    5489 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 18:49:51.037707    5489 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 18:49:51.037775    5489 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 18:49:51.037808    5489 start.go:296] duration metric: took 124.026512ms for postStartSetup
	I1202 18:49:51.038108    5489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391119
	I1202 18:49:51.054407    5489 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/config.json ...
	I1202 18:49:51.054683    5489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 18:49:51.054732    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:51.072070    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:51.174670    5489 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 18:49:51.179336    5489 start.go:128] duration metric: took 12.221466489s to createHost
	I1202 18:49:51.179411    5489 start.go:83] releasing machines lock for "addons-391119", held for 12.221660406s
	I1202 18:49:51.179527    5489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-391119
	I1202 18:49:51.196987    5489 ssh_runner.go:195] Run: cat /version.json
	I1202 18:49:51.197015    5489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 18:49:51.197034    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:51.197078    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:49:51.217953    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:51.225771    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:49:51.412949    5489 ssh_runner.go:195] Run: systemctl --version
	I1202 18:49:51.418955    5489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 18:49:51.452458    5489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 18:49:51.456480    5489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 18:49:51.456555    5489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 18:49:51.482764    5489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 18:49:51.482835    5489 start.go:496] detecting cgroup driver to use...
	I1202 18:49:51.482875    5489 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 18:49:51.482929    5489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 18:49:51.499582    5489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 18:49:51.512965    5489 docker.go:218] disabling cri-docker service (if available) ...
	I1202 18:49:51.513060    5489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 18:49:51.530359    5489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 18:49:51.549199    5489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 18:49:51.664754    5489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 18:49:51.786676    5489 docker.go:234] disabling docker service ...
	I1202 18:49:51.786760    5489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 18:49:51.806700    5489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 18:49:51.820254    5489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 18:49:51.950358    5489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 18:49:52.069421    5489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 18:49:52.083030    5489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 18:49:52.097906    5489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 18:49:52.097988    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.106848    5489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 18:49:52.106967    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.116120    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.124454    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.132705    5489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 18:49:52.140694    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.149162    5489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:49:52.162605    5489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
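(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is reconstructed from the commands in the log, not a dump of the actual file:)

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]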
	I1202 18:49:52.171030    5489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 18:49:52.178104    5489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 18:49:52.178191    5489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 18:49:52.191804    5489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 18:49:52.199257    5489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:49:52.317792    5489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 18:49:52.497329    5489 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 18:49:52.497413    5489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 18:49:52.500935    5489 start.go:564] Will wait 60s for crictl version
	I1202 18:49:52.500993    5489 ssh_runner.go:195] Run: which crictl
	I1202 18:49:52.504226    5489 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 18:49:52.530484    5489 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 18:49:52.530614    5489 ssh_runner.go:195] Run: crio --version
	I1202 18:49:52.558221    5489 ssh_runner.go:195] Run: crio --version
	I1202 18:49:52.592715    5489 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 18:49:52.595477    5489 cli_runner.go:164] Run: docker network inspect addons-391119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 18:49:52.611211    5489 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 18:49:52.614768    5489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 18:49:52.623923    5489 kubeadm.go:884] updating cluster {Name:addons-391119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 18:49:52.624033    5489 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:49:52.624093    5489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 18:49:52.657001    5489 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 18:49:52.657025    5489 crio.go:433] Images already preloaded, skipping extraction
	I1202 18:49:52.657080    5489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 18:49:52.683608    5489 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 18:49:52.683631    5489 cache_images.go:86] Images are preloaded, skipping loading
	I1202 18:49:52.683639    5489 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 18:49:52.683724    5489 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-391119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 18:49:52.683804    5489 ssh_runner.go:195] Run: crio config
	I1202 18:49:52.746863    5489 cni.go:84] Creating CNI manager for ""
	I1202 18:49:52.746889    5489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:49:52.746911    5489 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 18:49:52.746943    5489 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-391119 NodeName:addons-391119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 18:49:52.747094    5489 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-391119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 18:49:52.747179    5489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 18:49:52.754756    5489 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 18:49:52.754864    5489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 18:49:52.762161    5489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 18:49:52.774347    5489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 18:49:52.786860    5489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1202 18:49:52.799613    5489 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 18:49:52.802995    5489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 18:49:52.812175    5489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:49:52.930581    5489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 18:49:52.944910    5489 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119 for IP: 192.168.49.2
	I1202 18:49:52.944928    5489 certs.go:195] generating shared ca certs ...
	I1202 18:49:52.944943    5489 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:52.945062    5489 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 18:49:53.003181    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt ...
	I1202 18:49:53.003212    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt: {Name:mkd5b1a9f0fad7d0ecc11f2846b0a7f559226cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.003384    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key ...
	I1202 18:49:53.003399    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key: {Name:mk8ac871d12285a41ebadf8ebc95b8c667ac34ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.003475    5489 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 18:49:53.082891    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt ...
	I1202 18:49:53.082931    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt: {Name:mk22192dbf2731a3b3c66a7552e99ff805da04a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.083107    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key ...
	I1202 18:49:53.083119    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key: {Name:mkdcb754b25a4ed546d2e13cf9eb82c336b19234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.083195    5489 certs.go:257] generating profile certs ...
	I1202 18:49:53.083253    5489 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.key
	I1202 18:49:53.083269    5489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt with IP's: []
	I1202 18:49:53.333281    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt ...
	I1202 18:49:53.333311    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: {Name:mkc369093f7111c2a19e4c8ebab715eb936404cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.333484    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.key ...
	I1202 18:49:53.333497    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.key: {Name:mkc8bff54b56ba34d43f581da01a9dd0989cd180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.333585    5489 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key.5d6cb8eb
	I1202 18:49:53.333603    5489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt.5d6cb8eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 18:49:53.686212    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt.5d6cb8eb ...
	I1202 18:49:53.686242    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt.5d6cb8eb: {Name:mkd3317e6b5fa90c4661316fcf9e65c07fa3648c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.686423    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key.5d6cb8eb ...
	I1202 18:49:53.686438    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key.5d6cb8eb: {Name:mkfdd3ae873064a062b9c5e5acfce475eb3ec12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.686521    5489 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt.5d6cb8eb -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt
	I1202 18:49:53.686602    5489 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key.5d6cb8eb -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key
	I1202 18:49:53.686664    5489 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.key
	I1202 18:49:53.686683    5489 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.crt with IP's: []
	I1202 18:49:53.761580    5489 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.crt ...
	I1202 18:49:53.761607    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.crt: {Name:mkf899ff3aa2aa4efa224c71c03bb9e29baa4305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.761768    5489 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.key ...
	I1202 18:49:53.761779    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.key: {Name:mk158bcb5647a08b7e4ef0c069c9cb4748caa22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:49:53.761959    5489 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 18:49:53.761998    5489 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 18:49:53.762028    5489 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 18:49:53.762060    5489 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
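(The certs.go/crypto.go steps above generate the shared minikubeCA and then sign the profile certificates; the apiserver certificate is issued for the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2 listed earlier. A condensed crypto/x509 sketch of signing a serving certificate with those SANs, assuming RSA keys and a one-year validity; this is not minikube's crypto.go:)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Errors are elided for brevity in this sketch.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs as in the apiserver certificate generated above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Println(len(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})), "PEM bytes")
	}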
	I1202 18:49:53.762635    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 18:49:53.780269    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 18:49:53.799769    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 18:49:53.816804    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 18:49:53.833558    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 18:49:53.850488    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 18:49:53.869689    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 18:49:53.886684    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 18:49:53.903508    5489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 18:49:53.920143    5489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 18:49:53.931931    5489 ssh_runner.go:195] Run: openssl version
	I1202 18:49:53.937989    5489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 18:49:53.946017    5489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:49:53.949332    5489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:49:53.949391    5489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:49:53.990931    5489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 18:49:53.998777    5489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 18:49:54.002240    5489 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 18:49:54.002293    5489 kubeadm.go:401] StartCluster: {Name:addons-391119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-391119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 18:49:54.002372    5489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:49:54.002439    5489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:49:54.034458    5489 cri.go:89] found id: ""
	I1202 18:49:54.034543    5489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 18:49:54.042910    5489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 18:49:54.050776    5489 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 18:49:54.050869    5489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 18:49:54.058786    5489 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 18:49:54.058805    5489 kubeadm.go:158] found existing configuration files:
	
	I1202 18:49:54.058859    5489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 18:49:54.066780    5489 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 18:49:54.066851    5489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 18:49:54.077225    5489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 18:49:54.085202    5489 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 18:49:54.085279    5489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 18:49:54.093137    5489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 18:49:54.101016    5489 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 18:49:54.101090    5489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 18:49:54.108668    5489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 18:49:54.116185    5489 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 18:49:54.116278    5489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 18:49:54.123287    5489 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 18:49:54.164380    5489 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 18:49:54.164446    5489 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 18:49:54.188139    5489 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 18:49:54.188219    5489 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 18:49:54.188260    5489 kubeadm.go:319] OS: Linux
	I1202 18:49:54.188310    5489 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 18:49:54.188362    5489 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 18:49:54.188413    5489 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 18:49:54.188463    5489 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 18:49:54.188515    5489 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 18:49:54.188566    5489 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 18:49:54.188615    5489 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 18:49:54.188665    5489 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 18:49:54.188714    5489 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 18:49:54.259497    5489 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 18:49:54.259632    5489 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 18:49:54.259756    5489 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 18:49:54.267224    5489 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 18:49:54.273761    5489 out.go:252]   - Generating certificates and keys ...
	I1202 18:49:54.273855    5489 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 18:49:54.273928    5489 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 18:49:54.923195    5489 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 18:49:55.128810    5489 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 18:49:55.401464    5489 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 18:49:56.012419    5489 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 18:49:56.410899    5489 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 18:49:56.411252    5489 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-391119 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 18:49:56.674333    5489 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 18:49:56.674719    5489 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-391119 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 18:49:57.842638    5489 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 18:49:58.271773    5489 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 18:49:58.314443    5489 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 18:49:58.314994    5489 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 18:49:58.498962    5489 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 18:49:58.761365    5489 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 18:50:00.740171    5489 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 18:50:00.930579    5489 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 18:50:01.408648    5489 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 18:50:01.409489    5489 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 18:50:01.412284    5489 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 18:50:01.415804    5489 out.go:252]   - Booting up control plane ...
	I1202 18:50:01.415914    5489 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 18:50:01.415999    5489 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 18:50:01.416070    5489 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 18:50:01.432270    5489 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 18:50:01.432586    5489 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 18:50:01.441414    5489 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 18:50:01.445692    5489 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 18:50:01.445765    5489 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 18:50:01.582143    5489 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 18:50:01.582264    5489 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 18:50:02.581293    5489 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001205626s
	I1202 18:50:02.585927    5489 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 18:50:02.586023    5489 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1202 18:50:02.586334    5489 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 18:50:02.586424    5489 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 18:50:05.263166    5489 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.676256657s
	I1202 18:50:07.544411    5489 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.957227339s
	I1202 18:50:08.088575    5489 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50224687s
	I1202 18:50:08.125715    5489 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 18:50:08.641235    5489 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 18:50:08.655424    5489 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 18:50:08.655634    5489 kubeadm.go:319] [mark-control-plane] Marking the node addons-391119 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 18:50:08.666468    5489 kubeadm.go:319] [bootstrap-token] Using token: njyjbc.wmlogeow2ifd8inq
	I1202 18:50:08.669342    5489 out.go:252]   - Configuring RBAC rules ...
	I1202 18:50:08.669469    5489 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 18:50:08.674331    5489 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 18:50:08.682471    5489 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 18:50:08.689231    5489 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 18:50:08.693171    5489 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 18:50:08.699540    5489 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 18:50:08.838421    5489 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 18:50:09.280619    5489 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 18:50:09.842896    5489 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 18:50:09.844119    5489 kubeadm.go:319] 
	I1202 18:50:09.844193    5489 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 18:50:09.844207    5489 kubeadm.go:319] 
	I1202 18:50:09.844284    5489 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 18:50:09.844288    5489 kubeadm.go:319] 
	I1202 18:50:09.844313    5489 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 18:50:09.844372    5489 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 18:50:09.844422    5489 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 18:50:09.844426    5489 kubeadm.go:319] 
	I1202 18:50:09.844480    5489 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 18:50:09.844483    5489 kubeadm.go:319] 
	I1202 18:50:09.844531    5489 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 18:50:09.844535    5489 kubeadm.go:319] 
	I1202 18:50:09.844587    5489 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 18:50:09.844662    5489 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 18:50:09.844730    5489 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 18:50:09.844735    5489 kubeadm.go:319] 
	I1202 18:50:09.844819    5489 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 18:50:09.844897    5489 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 18:50:09.844901    5489 kubeadm.go:319] 
	I1202 18:50:09.844986    5489 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token njyjbc.wmlogeow2ifd8inq \
	I1202 18:50:09.845089    5489 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04aaaaae77b68f960c0a9ced9ec2515a576e5d33be14c52dd78ac859fdceb88b \
	I1202 18:50:09.845110    5489 kubeadm.go:319] 	--control-plane 
	I1202 18:50:09.845113    5489 kubeadm.go:319] 
	I1202 18:50:09.845199    5489 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 18:50:09.845203    5489 kubeadm.go:319] 
	I1202 18:50:09.845285    5489 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token njyjbc.wmlogeow2ifd8inq \
	I1202 18:50:09.845387    5489 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04aaaaae77b68f960c0a9ced9ec2515a576e5d33be14c52dd78ac859fdceb88b 
	I1202 18:50:09.847691    5489 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1202 18:50:09.847913    5489 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 18:50:09.848027    5489 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
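	Aside (not part of the captured log): the kubeadm join commands printed above embed a --discovery-token-ca-cert-hash. Assuming the default /etc/kubernetes/pki layout (not verified from this run), that hash is conventionally recomputed on the control plane as the SHA-256 of the cluster CA public key:

	    # Sketch: recompute the discovery-token CA certificate hash used by "kubeadm join"
	    # (standard kubeadm procedure; paths are the kubeadm defaults, not taken from this run).
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'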
	I1202 18:50:09.848042    5489 cni.go:84] Creating CNI manager for ""
	I1202 18:50:09.848050    5489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:50:09.851085    5489 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 18:50:09.853919    5489 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 18:50:09.857513    5489 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 18:50:09.857529    5489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 18:50:09.871760    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 18:50:10.187180    5489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 18:50:10.187319    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:10.187405    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-391119 minikube.k8s.io/updated_at=2025_12_02T18_50_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689 minikube.k8s.io/name=addons-391119 minikube.k8s.io/primary=true
	I1202 18:50:10.375280    5489 ops.go:34] apiserver oom_adj: -16
	I1202 18:50:10.375297    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:10.875870    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:11.375784    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:11.876189    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:12.376163    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:12.876195    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:13.375549    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:13.875607    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:14.376319    5489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 18:50:14.483282    5489 kubeadm.go:1114] duration metric: took 4.296015962s to wait for elevateKubeSystemPrivileges
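	Aside (not part of the captured log): the elevateKubeSystemPrivileges step timed above corresponds to the commands logged between 18:50:10 and 18:50:14, i.e. binding kube-system's default ServiceAccount to cluster-admin and polling "kubectl get sa default" until the ServiceAccount exists. A rough shell equivalent, with flags copied from the log:

	    # Sketch of the privilege-elevation sequence recorded above.
	    kubectl create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default
	    # Poll until the default ServiceAccount has been created by the controller manager.
	    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done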
	I1202 18:50:14.483316    5489 kubeadm.go:403] duration metric: took 20.481030975s to StartCluster
	I1202 18:50:14.483334    5489 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:50:14.483455    5489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:50:14.483881    5489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:50:14.484100    5489 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 18:50:14.484259    5489 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 18:50:14.484565    5489 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:50:14.484663    5489 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 18:50:14.484779    5489 addons.go:70] Setting yakd=true in profile "addons-391119"
	I1202 18:50:14.484803    5489 addons.go:239] Setting addon yakd=true in "addons-391119"
	I1202 18:50:14.484832    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.485403    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.485946    5489 addons.go:70] Setting metrics-server=true in profile "addons-391119"
	I1202 18:50:14.485964    5489 addons.go:239] Setting addon metrics-server=true in "addons-391119"
	I1202 18:50:14.485984    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.486393    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.486522    5489 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-391119"
	I1202 18:50:14.486540    5489 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-391119"
	I1202 18:50:14.486557    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.487036    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.490712    5489 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-391119"
	I1202 18:50:14.491036    5489 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-391119"
	I1202 18:50:14.491139    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.493547    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.490867    5489 addons.go:70] Setting cloud-spanner=true in profile "addons-391119"
	I1202 18:50:14.497817    5489 addons.go:239] Setting addon cloud-spanner=true in "addons-391119"
	I1202 18:50:14.497893    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.498381    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.490877    5489 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-391119"
	I1202 18:50:14.490888    5489 addons.go:70] Setting default-storageclass=true in profile "addons-391119"
	I1202 18:50:14.490892    5489 addons.go:70] Setting gcp-auth=true in profile "addons-391119"
	I1202 18:50:14.490895    5489 addons.go:70] Setting ingress=true in profile "addons-391119"
	I1202 18:50:14.490898    5489 addons.go:70] Setting ingress-dns=true in profile "addons-391119"
	I1202 18:50:14.490901    5489 addons.go:70] Setting inspektor-gadget=true in profile "addons-391119"
	I1202 18:50:14.490935    5489 out.go:179] * Verifying Kubernetes components...
	I1202 18:50:14.490952    5489 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-391119"
	I1202 18:50:14.490956    5489 addons.go:70] Setting registry=true in profile "addons-391119"
	I1202 18:50:14.490962    5489 addons.go:70] Setting registry-creds=true in profile "addons-391119"
	I1202 18:50:14.490968    5489 addons.go:70] Setting storage-provisioner=true in profile "addons-391119"
	I1202 18:50:14.490975    5489 addons.go:70] Setting volumesnapshots=true in profile "addons-391119"
	I1202 18:50:14.490986    5489 addons.go:70] Setting volcano=true in profile "addons-391119"
	I1202 18:50:14.498706    5489 addons.go:239] Setting addon volcano=true in "addons-391119"
	I1202 18:50:14.498734    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.502902    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.504261    5489 addons.go:239] Setting addon inspektor-gadget=true in "addons-391119"
	I1202 18:50:14.504330    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.504827    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.541882    5489 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-391119"
	I1202 18:50:14.542286    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.543068    5489 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-391119"
	I1202 18:50:14.543104    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.543543    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.565619    5489 addons.go:239] Setting addon registry=true in "addons-391119"
	I1202 18:50:14.565761    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.566236    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.581854    5489 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-391119"
	I1202 18:50:14.582203    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.589320    5489 addons.go:239] Setting addon registry-creds=true in "addons-391119"
	I1202 18:50:14.589383    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.589902    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.605200    5489 mustload.go:66] Loading cluster: addons-391119
	I1202 18:50:14.605423    5489 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:50:14.605715    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.621853    5489 addons.go:239] Setting addon storage-provisioner=true in "addons-391119"
	I1202 18:50:14.621901    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.622368    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.632883    5489 addons.go:239] Setting addon ingress=true in "addons-391119"
	I1202 18:50:14.632937    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.633403    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.637834    5489 addons.go:239] Setting addon volumesnapshots=true in "addons-391119"
	I1202 18:50:14.637891    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.638382    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.641057    5489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:50:14.655814    5489 addons.go:239] Setting addon ingress-dns=true in "addons-391119"
	I1202 18:50:14.656292    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.656826    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.677517    5489 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 18:50:14.687544    5489 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 18:50:14.708112    5489 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 18:50:14.720792    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 18:50:14.720881    5489 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 18:50:14.749977    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.721309    5489 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 18:50:14.759093    5489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 18:50:14.759249    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.721325    5489 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 18:50:14.793769    5489 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1202 18:50:14.793956    5489 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 18:50:14.721401    5489 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 18:50:14.794413    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 18:50:14.794479    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.806598    5489 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 18:50:14.806627    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 18:50:14.806686    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.813621    5489 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 18:50:14.814749    5489 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-391119"
	I1202 18:50:14.814800    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.815228    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.823828    5489 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 18:50:14.823855    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 18:50:14.823924    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.836345    5489 addons.go:239] Setting addon default-storageclass=true in "addons-391119"
	I1202 18:50:14.840746    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.841324    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:14.842597    5489 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 18:50:14.842612    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 18:50:14.842656    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.867012    5489 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 18:50:14.872269    5489 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 18:50:14.872328    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 18:50:14.872433    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.887178    5489 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 18:50:14.892532    5489 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 18:50:14.892643    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 18:50:14.893074    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:14.900250    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 18:50:14.900336    5489 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 18:50:14.900346    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 18:50:14.900399    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.916909    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 18:50:14.916931    5489 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 18:50:14.916996    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:14.944361    5489 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 18:50:14.947463    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 18:50:14.947610    5489 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 18:50:14.947624    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 18:50:14.947690    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.024002    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 18:50:15.024815    5489 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 18:50:15.032798    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 18:50:15.034973    5489 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 18:50:15.036063    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.047433    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 18:50:15.047610    5489 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 18:50:15.047649    5489 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 18:50:15.053780    5489 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 18:50:15.053807    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 18:50:15.053876    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.059935    5489 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 18:50:15.060079    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 18:50:15.067565    5489 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 18:50:15.067589    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 18:50:15.067654    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.070918    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 18:50:15.075695    5489 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 18:50:15.079309    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 18:50:15.079394    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 18:50:15.079459    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.118705    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.119766    5489 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 18:50:15.119784    5489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 18:50:15.119846    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.119899    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.120573    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.139654    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.139756    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.142697    5489 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 18:50:15.147039    5489 out.go:179]   - Using image docker.io/busybox:stable
	I1202 18:50:15.151426    5489 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 18:50:15.151451    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 18:50:15.151518    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:15.172029    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.179893    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.185814    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.229470    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.253768    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.254713    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.255391    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	W1202 18:50:15.258012    5489 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 18:50:15.258041    5489 retry.go:31] will retry after 266.594916ms: ssh: handshake failed: EOF
	W1202 18:50:15.258115    5489 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 18:50:15.258124    5489 retry.go:31] will retry after 215.927281ms: ssh: handshake failed: EOF
	W1202 18:50:15.258162    5489 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 18:50:15.258177    5489 retry.go:31] will retry after 364.277984ms: ssh: handshake failed: EOF
	I1202 18:50:15.262220    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	W1202 18:50:15.266337    5489 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1202 18:50:15.266366    5489 retry.go:31] will retry after 359.391623ms: ssh: handshake failed: EOF
	I1202 18:50:15.268055    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:15.272457    5489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 18:50:15.587083    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 18:50:15.743964    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 18:50:15.791340    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 18:50:15.857537    5489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 18:50:15.857611    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 18:50:15.887462    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 18:50:15.923060    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 18:50:15.928698    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 18:50:15.951419    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 18:50:15.969005    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 18:50:15.985451    5489 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 18:50:15.985526    5489 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 18:50:15.994364    5489 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 18:50:15.994433    5489 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 18:50:16.039950    5489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 18:50:16.040026    5489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 18:50:16.085598    5489 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 18:50:16.085692    5489 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 18:50:16.109584    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 18:50:16.124733    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 18:50:16.124753    5489 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 18:50:16.129246    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 18:50:16.129266    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 18:50:16.176722    5489 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 18:50:16.176808    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 18:50:16.203143    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 18:50:16.205937    5489 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 18:50:16.206016    5489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 18:50:16.219746    5489 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 18:50:16.219818    5489 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 18:50:16.236009    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 18:50:16.236081    5489 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 18:50:16.312885    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 18:50:16.351861    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 18:50:16.351939    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 18:50:16.363488    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 18:50:16.363568    5489 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 18:50:16.378293    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 18:50:16.378368    5489 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 18:50:16.440891    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 18:50:16.514248    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 18:50:16.514324    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 18:50:16.517032    5489 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 18:50:16.517099    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 18:50:16.549106    5489 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 18:50:16.549179    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 18:50:16.635140    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 18:50:16.655075    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 18:50:16.655150    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 18:50:16.682750    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 18:50:16.847895    5489 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 18:50:16.847968    5489 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 18:50:16.897511    5489 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.625019097s)
	I1202 18:50:16.897638    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.310525462s)
	I1202 18:50:16.897836    5489 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.87299565s)
	I1202 18:50:16.897953    5489 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
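	Aside (not part of the captured log): the host record injected above comes from the sed pipeline logged at 18:50:15. The Corefile fragment it inserts, reconstructed directly from that command, and a way to inspect the result:

	    # View the patched Corefile (kubeconfig path as used in the log):
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	      get configmap coredns -o jsonpath='{.data.Corefile}'
	    # The sed expression adds this block ahead of the "forward . /etc/resolv.conf" line:
	    #     hosts {
	    #        192.168.49.1 host.minikube.internal
	    #        fallthrough
	    #     }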
	I1202 18:50:16.899418    5489 node_ready.go:35] waiting up to 6m0s for node "addons-391119" to be "Ready" ...
	I1202 18:50:17.125383    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 18:50:17.125449    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 18:50:17.333014    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 18:50:17.333087    5489 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 18:50:17.403088    5489 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-391119" context rescaled to 1 replicas
	I1202 18:50:17.615243    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 18:50:17.615313    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 18:50:17.760927    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 18:50:17.760995    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 18:50:17.878235    5489 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 18:50:17.878312    5489 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 18:50:18.082366    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1202 18:50:18.956574    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:19.746765    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.002717356s)
	I1202 18:50:19.746814    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.95541696s)
	I1202 18:50:19.746884    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.859404158s)
	I1202 18:50:19.746922    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.823791869s)
	I1202 18:50:20.623652    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.672152267s)
	I1202 18:50:20.623803    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.654735342s)
	I1202 18:50:20.623840    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.514163358s)
	I1202 18:50:20.623857    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.420695722s)
	I1202 18:50:20.623911    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.310956263s)
	I1202 18:50:20.624411    5489 addons.go:495] Verifying addon metrics-server=true in "addons-391119"
	I1202 18:50:20.623933    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.182971127s)
	I1202 18:50:20.624423    5489 addons.go:495] Verifying addon registry=true in "addons-391119"
	I1202 18:50:20.623960    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.988749592s)
	I1202 18:50:20.624849    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.696088232s)
	I1202 18:50:20.624893    5489 addons.go:495] Verifying addon ingress=true in "addons-391119"
	I1202 18:50:20.624034    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.941212597s)
	W1202 18:50:20.629131    5489 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 18:50:20.629166    5489 retry.go:31] will retry after 223.057158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
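	Aside (not part of the captured log): the failure above is a CRD-establishment race: csi-hostpath-snapshotclass.yaml references kind VolumeSnapshotClass in the same apply that creates its CRD, so the REST mapping is not yet available. minikube simply retries with --force (see 18:50:20.852930 below); a common way to avoid the race, sketched here for reference only, is to wait for the CRD to be Established before applying dependents:

	    # Sketch: create the snapshot CRDs, wait for establishment, then apply dependents.
	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=Established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml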
	I1202 18:50:20.630013    5489 out.go:179] * Verifying registry addon...
	I1202 18:50:20.630072    5489 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-391119 service yakd-dashboard -n yakd-dashboard
	
	I1202 18:50:20.631875    5489 out.go:179] * Verifying ingress addon...
	I1202 18:50:20.636566    5489 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 18:50:20.637343    5489 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 18:50:20.647608    5489 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 18:50:20.647627    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:20.647922    5489 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 18:50:20.647942    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
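	Aside (not part of the captured log): the kapi.go polling above is minikube's own readiness loop for the registry and ingress addons; an equivalent manual check, with the label selectors copied from the log, would be:

	    # Wait for the registry and ingress-nginx pods the addons just created.
	    kubectl -n kube-system wait --for=condition=Ready pod \
	      -l kubernetes.io/minikube-addons=registry --timeout=6m
	    kubectl -n ingress-nginx wait --for=condition=Ready pod \
	      -l app.kubernetes.io/name=ingress-nginx --timeout=6m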
	W1202 18:50:20.650205    5489 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
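	Aside (not part of the captured log): the 'storage-provisioner-rancher' error above is an optimistic-concurrency conflict while marking the local-path StorageClass as default. The operation being attempted amounts to the standard default-class annotation (sketch only; a conflict like the one logged is resolved by re-reading the object and retrying the patch):

	    # Mark local-path as the default StorageClass.
	    kubectl patch storageclass local-path -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'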
	I1202 18:50:20.852930    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 18:50:20.976900    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.894440669s)
	I1202 18:50:20.976934    5489 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-391119"
	I1202 18:50:20.979927    5489 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 18:50:20.984451    5489 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 18:50:20.991633    5489 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 18:50:20.991701    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:21.141514    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:21.142457    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 18:50:21.402603    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:21.487846    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:21.640427    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:21.640790    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:21.988282    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:22.141234    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:22.141368    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:22.488247    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:22.503184    5489 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 18:50:22.503330    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:22.521446    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:22.634611    5489 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 18:50:22.641868    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:22.642228    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:22.649507    5489 addons.go:239] Setting addon gcp-auth=true in "addons-391119"
	I1202 18:50:22.649551    5489 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:50:22.650022    5489 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:50:22.667902    5489 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 18:50:22.667956    5489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:50:22.685904    5489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:50:22.987436    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:23.140491    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:23.140634    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 18:50:23.403414    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:23.487759    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:23.640704    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:23.640986    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:23.649127    5489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.796096203s)
	I1202 18:50:23.652361    5489 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 18:50:23.655166    5489 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 18:50:23.658064    5489 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 18:50:23.658092    5489 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 18:50:23.672073    5489 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 18:50:23.672137    5489 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 18:50:23.684516    5489 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 18:50:23.684536    5489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 18:50:23.700274    5489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 18:50:23.987820    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:24.145903    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:24.146683    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:24.212984    5489 addons.go:495] Verifying addon gcp-auth=true in "addons-391119"
	I1202 18:50:24.215979    5489 out.go:179] * Verifying gcp-auth addon...
	I1202 18:50:24.219588    5489 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 18:50:24.244837    5489 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 18:50:24.244861    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:24.487237    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:24.640567    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:24.641002    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:24.722770    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:24.987458    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:25.140748    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:25.140920    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:25.222587    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:25.488360    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:25.641625    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:25.641808    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:25.722751    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:25.902678    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:25.987650    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:26.139765    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:26.140846    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:26.222615    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:26.488096    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:26.640542    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:26.641056    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:26.722970    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:26.993397    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:27.140152    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:27.140327    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:27.222861    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:27.487703    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:27.640003    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:27.640927    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:27.722600    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:27.905297    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:27.988359    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:28.141159    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:28.141306    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:28.222968    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:28.487745    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:28.640754    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:28.640893    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:28.722816    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:28.987900    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:29.140224    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:29.140262    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:29.222937    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:29.488049    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:29.640593    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:29.640743    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:29.722458    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:29.987773    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:30.140270    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:30.143122    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:30.222908    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:30.402309    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:30.488122    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:30.640341    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:30.640474    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:30.723274    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:30.988404    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:31.141353    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:31.141927    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:31.222261    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:31.487720    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:31.639721    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:31.640352    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:31.722822    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:31.988447    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:32.139383    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:32.139929    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:32.223094    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:32.403062    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:32.488141    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:32.640211    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:32.640442    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:32.723059    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:32.987204    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:33.140366    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:33.140675    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:33.223217    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:33.488259    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:33.642738    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:33.643262    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:33.722830    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:33.987156    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:34.140499    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:34.140907    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:34.222945    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:34.487726    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:34.640608    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:34.640754    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:34.722688    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:34.902721    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:34.987363    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:35.140524    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:35.140698    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:35.222383    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:35.487574    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:35.640667    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:35.641222    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:35.722966    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:35.990299    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:36.140843    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:36.141150    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:36.224317    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:36.487857    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:36.639538    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:36.640842    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:36.722418    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:36.903070    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:36.987699    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:37.139317    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:37.140235    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:37.223275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:37.487847    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:37.639389    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:37.640437    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:37.723385    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:37.987854    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:38.141113    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:38.141444    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:38.223331    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:38.488137    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:38.640338    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:38.641091    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:38.722755    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:38.988112    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:39.140222    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:39.140272    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:39.223125    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:39.402877    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:39.487820    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:39.641121    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:39.641389    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:39.722927    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:39.988100    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:40.141084    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:40.141713    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:40.222747    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:40.488107    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:40.640406    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:40.640662    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:40.723373    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:40.987981    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:41.139817    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:41.140155    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:41.223127    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:41.403474    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:41.488592    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:41.640394    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:41.640549    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:41.722295    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:41.987849    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:42.140918    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:42.141704    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:42.222906    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:42.487833    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:42.639639    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:42.640973    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:42.723133    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:42.988070    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:43.140130    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:43.140407    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:43.223519    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:43.487767    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:43.640596    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:43.640804    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:43.722602    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:43.903306    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:43.988189    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:44.140608    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:44.140680    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:44.223017    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:44.487714    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:44.639642    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:44.641007    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:44.723057    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:44.988278    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:45.142164    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:45.142609    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:45.224697    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:45.488057    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:45.640133    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:45.640415    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:45.723274    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:45.903512    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:45.988089    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:46.140387    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:46.140510    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:46.222701    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:46.487491    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:46.639861    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:46.641279    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:46.723135    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:46.988116    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:47.140827    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:47.141245    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:47.222977    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:47.487845    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:47.640718    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:47.640805    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:47.722560    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:47.904069    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:47.988263    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:48.140885    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:48.141365    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:48.223155    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:48.487837    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:48.640210    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:48.640565    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:48.722487    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:48.988059    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:49.140115    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:49.140179    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:49.223014    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:49.487876    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:49.641015    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:49.641137    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:49.723264    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:49.987933    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:50.140847    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:50.141019    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:50.222708    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:50.402684    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:50.487916    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:50.639651    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:50.640776    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:50.722708    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:50.988294    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:51.140916    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:51.141018    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:51.224328    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:51.487738    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:51.639618    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:51.640778    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:51.722664    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:51.987409    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:52.139496    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:52.141009    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:52.222778    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:52.487521    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:52.640979    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:52.641095    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:52.722966    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:52.903072    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:52.987941    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:53.140664    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:53.140711    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:53.223255    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:53.488251    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:53.640558    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:53.640690    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:53.722551    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:53.987574    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:54.139595    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:54.140424    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:54.223331    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:54.487738    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:54.639952    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:54.641867    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:54.722773    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:54.987886    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:55.141702    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:55.142420    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:55.224522    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1202 18:50:55.403686    5489 node_ready.go:57] node "addons-391119" has "Ready":"False" status (will retry)
	I1202 18:50:55.487521    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:55.663776    5489 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 18:50:55.663892    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:55.678692    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:55.724279    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:55.924752    5489 node_ready.go:49] node "addons-391119" is "Ready"
	I1202 18:50:55.924838    5489 node_ready.go:38] duration metric: took 39.025077955s for node "addons-391119" to be "Ready" ...
	I1202 18:50:55.924868    5489 api_server.go:52] waiting for apiserver process to appear ...
	I1202 18:50:55.924954    5489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 18:50:55.967154    5489 api_server.go:72] duration metric: took 41.483013719s to wait for apiserver process to appear ...
	I1202 18:50:55.967209    5489 api_server.go:88] waiting for apiserver healthz status ...
	I1202 18:50:55.967239    5489 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 18:50:56.004124    5489 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 18:50:56.005237    5489 api_server.go:141] control plane version: v1.34.2
	I1202 18:50:56.005267    5489 api_server.go:131] duration metric: took 38.050894ms to wait for apiserver health ...
	I1202 18:50:56.005281    5489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 18:50:56.019140    5489 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 18:50:56.019171    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:56.027654    5489 system_pods.go:59] 19 kube-system pods found
	I1202 18:50:56.027704    5489 system_pods.go:61] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.027711    5489 system_pods.go:61] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending
	I1202 18:50:56.027723    5489 system_pods.go:61] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending
	I1202 18:50:56.027735    5489 system_pods.go:61] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending
	I1202 18:50:56.027739    5489 system_pods.go:61] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.027743    5489 system_pods.go:61] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.027760    5489 system_pods.go:61] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.027765    5489 system_pods.go:61] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.027778    5489 system_pods.go:61] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.027783    5489 system_pods.go:61] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.027788    5489 system_pods.go:61] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.027794    5489 system_pods.go:61] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.027803    5489 system_pods.go:61] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending
	I1202 18:50:56.027816    5489 system_pods.go:61] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending
	I1202 18:50:56.027827    5489 system_pods.go:61] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.027844    5489 system_pods.go:61] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending
	I1202 18:50:56.027857    5489 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.027867    5489 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending
	I1202 18:50:56.027873    5489 system_pods.go:61] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending
	I1202 18:50:56.027879    5489 system_pods.go:74] duration metric: took 22.592436ms to wait for pod list to return data ...
	I1202 18:50:56.027891    5489 default_sa.go:34] waiting for default service account to be created ...
	I1202 18:50:56.039291    5489 default_sa.go:45] found service account: "default"
	I1202 18:50:56.039353    5489 default_sa.go:55] duration metric: took 11.452454ms for default service account to be created ...
	I1202 18:50:56.039417    5489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 18:50:56.050531    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:56.050581    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.050591    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:56.050596    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending
	I1202 18:50:56.050600    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending
	I1202 18:50:56.050604    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.050609    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.050616    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.050621    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.050635    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.050640    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.050645    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.050664    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.050681    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending
	I1202 18:50:56.050686    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending
	I1202 18:50:56.050692    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.050696    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending
	I1202 18:50:56.050703    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.050714    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending
	I1202 18:50:56.050718    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending
	I1202 18:50:56.050741    5489 retry.go:31] will retry after 205.871252ms: missing components: kube-dns
	I1202 18:50:56.143193    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:56.143335    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:56.228586    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:56.284384    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:56.284464    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.284490    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:56.284527    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:56.284554    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:56.284576    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.284600    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.284632    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.284655    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.284676    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.284694    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.284714    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.284746    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.284772    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:56.284793    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:56.284814    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.284848    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:56.284873    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.284895    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.284916    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:50:56.284958    5489 retry.go:31] will retry after 250.577982ms: missing components: kube-dns
	I1202 18:50:56.495272    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:56.598015    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:56.598122    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.598175    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:56.598231    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:56.598273    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:56.598322    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.598361    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.598381    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.598404    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.598454    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.598489    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.598514    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.598541    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.598580    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:56.598620    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:56.598642    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.598664    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:56.598720    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.598755    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.598797    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:50:56.598840    5489 retry.go:31] will retry after 368.305825ms: missing components: kube-dns
	I1202 18:50:56.692788    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:56.693510    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:56.722445    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:56.972248    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:56.972287    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:56.972296    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:56.972306    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:56.972312    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:56.972317    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:56.972323    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:56.972327    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:56.972332    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:56.972338    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:56.972346    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:56.972351    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:56.972361    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:56.972369    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:56.972379    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:56.972385    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:56.972391    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:56.972400    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.972406    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:56.972412    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:50:56.972426    5489 retry.go:31] will retry after 501.793123ms: missing components: kube-dns
	I1202 18:50:56.987494    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:57.151379    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:57.151759    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:57.222529    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:57.479951    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:57.479988    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:50:57.479999    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:57.480008    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:57.480017    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:57.480027    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:57.480032    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:57.480040    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:57.480045    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:57.480053    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:57.480058    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:57.480065    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:57.480071    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:57.480077    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:57.480086    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:57.480092    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:57.480104    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:57.480110    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:57.480119    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:57.480128    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:50:57.480142    5489 retry.go:31] will retry after 503.085502ms: missing components: kube-dns
	I1202 18:50:57.488765    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:57.643136    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:57.643344    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:57.742782    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:58.006392    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:58.007061    5489 system_pods.go:86] 19 kube-system pods found
	I1202 18:50:58.007089    5489 system_pods.go:89] "coredns-66bc5c9577-khwqf" [27d70f22-cf5d-4707-8bf8-81cec3804f5c] Running
	I1202 18:50:58.007108    5489 system_pods.go:89] "csi-hostpath-attacher-0" [84c51807-2e8e-48c7-9ee3-a0bfddd511d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 18:50:58.007121    5489 system_pods.go:89] "csi-hostpath-resizer-0" [ebfbf650-c1ef-4675-b5b6-ab3dd5ebc8f0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 18:50:58.007131    5489 system_pods.go:89] "csi-hostpathplugin-gdz4d" [75b66d2c-1163-48c8-8a52-97050403e4e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 18:50:58.007140    5489 system_pods.go:89] "etcd-addons-391119" [62a88fcc-4258-47cd-a85c-6fe0aca7d914] Running
	I1202 18:50:58.007144    5489 system_pods.go:89] "kindnet-zszgk" [442b0797-9f93-4715-9635-ad3731d09bce] Running
	I1202 18:50:58.007149    5489 system_pods.go:89] "kube-apiserver-addons-391119" [1e306c15-43d8-4499-9291-6efb085df524] Running
	I1202 18:50:58.007154    5489 system_pods.go:89] "kube-controller-manager-addons-391119" [879f7b5c-e44a-4fb3-819b-b09b635fd372] Running
	I1202 18:50:58.007165    5489 system_pods.go:89] "kube-ingress-dns-minikube" [c9c08a0e-4c1e-4932-9ee1-efceac860993] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 18:50:58.007169    5489 system_pods.go:89] "kube-proxy-z4z6m" [8c5aff36-d77a-4a6b-bfee-952ae61ed4c5] Running
	I1202 18:50:58.007174    5489 system_pods.go:89] "kube-scheduler-addons-391119" [c8718db5-6deb-4fb8-b172-a15419664d7f] Running
	I1202 18:50:58.007180    5489 system_pods.go:89] "metrics-server-85b7d694d7-8qm5c" [cf80304c-b73c-4c15-9110-1feb0e9f65c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 18:50:58.007186    5489 system_pods.go:89] "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 18:50:58.007195    5489 system_pods.go:89] "registry-6b586f9694-sb27k" [24d08712-9500-4926-9722-bab19e2b91ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 18:50:58.007201    5489 system_pods.go:89] "registry-creds-764b6fb674-nvw8r" [2b0ede8c-f96f-4372-bdb4-19fc781427c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 18:50:58.007212    5489 system_pods.go:89] "registry-proxy-8cmtn" [9d20641c-df13-4599-a104-518a43ba5eb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 18:50:58.007217    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8x8cp" [dd95c9b2-67d8-448d-9f0c-bce2ce52187c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:58.007224    5489 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gmcbm" [17acb780-3c1b-40d3-a2b7-3c8614597e7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 18:50:58.007233    5489 system_pods.go:89] "storage-provisioner" [b8985d56-d767-4f32-a1ed-f96bdb89c289] Running
	I1202 18:50:58.007241    5489 system_pods.go:126] duration metric: took 1.967812019s to wait for k8s-apps to be running ...
	I1202 18:50:58.007255    5489 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 18:50:58.007306    5489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 18:50:58.023186    5489 system_svc.go:56] duration metric: took 15.918666ms WaitForService to wait for kubelet
	I1202 18:50:58.023218    5489 kubeadm.go:587] duration metric: took 43.539081571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 18:50:58.023239    5489 node_conditions.go:102] verifying NodePressure condition ...
	I1202 18:50:58.026932    5489 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 18:50:58.026966    5489 node_conditions.go:123] node cpu capacity is 2
	I1202 18:50:58.026983    5489 node_conditions.go:105] duration metric: took 3.738071ms to run NodePressure ...
	I1202 18:50:58.026996    5489 start.go:242] waiting for startup goroutines ...
	I1202 18:50:58.142069    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:58.142504    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:58.223134    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:58.489127    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:58.642566    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:58.642824    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:58.723172    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:58.988075    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:59.141046    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:59.141582    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:59.222449    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:59.488242    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:50:59.641430    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:50:59.642112    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:50:59.722928    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:50:59.988798    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:00.177671    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:00.180268    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:00.234017    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:00.489809    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:00.644231    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:00.644809    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:00.743709    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:00.988378    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:01.142614    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:01.143791    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:01.223846    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:01.490453    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:01.642585    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:01.643291    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:01.724244    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:01.988942    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:02.141351    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:02.141477    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:02.223468    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:02.488183    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:02.646274    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:02.646796    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:02.742436    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:02.987628    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:03.139595    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:03.140970    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:03.223271    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:03.488863    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:03.640961    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:03.641079    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:03.722661    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:03.987580    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:04.142365    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:04.142956    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:04.223461    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:04.488293    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:04.660718    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:04.667031    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:04.757016    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:04.988663    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:05.140629    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:05.140757    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:05.222693    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:05.489399    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:05.640819    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:05.641500    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:05.722695    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:05.988811    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:06.145492    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:06.145941    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:06.223506    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:06.488311    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:06.654707    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:06.654792    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:06.726255    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:06.988675    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:07.141235    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:07.142449    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:07.226839    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:07.488442    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:07.644448    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:07.645035    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:07.727502    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:07.990619    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:08.147142    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:08.147697    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:08.224944    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:08.491009    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:08.644036    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:08.644411    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:08.725873    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:08.988692    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:09.142412    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:09.144270    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:09.223645    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:09.488275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:09.642271    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:09.642680    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:09.722571    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:09.988807    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:10.141019    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:10.141563    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:10.222342    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:10.487283    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:10.641105    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:10.641305    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:10.741537    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:10.988931    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:11.142978    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:11.144643    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:11.223014    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:11.489417    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:11.641281    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:11.641735    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:11.741307    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:11.988241    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:12.142114    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:12.142423    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:12.223364    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:12.489212    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:12.641561    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:12.641858    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:12.722366    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:12.989151    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:13.142350    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:13.142554    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:13.244188    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:13.488457    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:13.642158    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:13.642664    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:13.724230    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:13.988339    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:14.141425    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:14.142035    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:14.223255    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:14.488409    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:14.640840    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:14.641043    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:14.723038    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:14.988859    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:15.140265    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:15.140527    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:15.222724    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:15.488310    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:15.647460    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:15.647593    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:15.723075    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:15.989412    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:16.139739    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:16.140917    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:16.223299    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:16.488405    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:16.641082    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:16.641254    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:16.727673    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:16.987830    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:17.140804    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:17.141336    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:17.223022    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:17.488623    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:17.641954    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:17.642004    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:17.723102    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:17.988321    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:18.142459    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:18.142787    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:18.222896    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:18.489251    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:18.640518    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:18.642242    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:18.724076    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:18.989102    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:19.139721    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:19.141287    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:19.223313    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:19.489126    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:19.641855    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:19.642200    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:19.723609    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:19.988132    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:20.145249    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:20.145560    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:20.222777    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:20.488467    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:20.645468    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:20.645819    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:20.731529    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:20.995534    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:21.140705    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:21.141046    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:21.223049    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:21.487719    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:21.640603    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:21.640857    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:21.734417    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:21.987594    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:22.139875    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:22.140532    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:22.223283    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:22.487914    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:22.641468    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:22.641610    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:22.722681    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:22.990394    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:23.140500    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:23.140655    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:23.222424    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:23.487342    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:23.641034    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:23.641059    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:23.722827    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:23.988678    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:24.139780    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:24.141918    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:24.223095    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:24.489405    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:24.642133    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:24.642504    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:24.722425    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:24.987972    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:25.142254    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:25.142426    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:25.223275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:25.488023    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:25.641188    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:25.641494    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:25.722486    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:25.988361    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:26.140631    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:26.140784    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:26.222972    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:26.488547    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:26.640926    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:26.641461    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 18:51:26.722528    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:26.988053    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:27.140714    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:27.141495    5489 kapi.go:107] duration metric: took 1m6.504930755s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 18:51:27.222119    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:27.488202    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:27.640592    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:27.722334    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:27.987885    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:28.141492    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:28.223388    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:28.487842    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:28.641530    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:28.722658    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:28.988677    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:29.140867    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:29.222582    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:29.487787    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:29.641150    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:29.722935    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:29.988215    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:30.140724    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:30.223117    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:30.489275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:30.641437    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:30.723898    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:30.988318    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:31.140601    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:31.223108    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:31.488170    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:31.640546    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:31.722787    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:31.988600    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:32.140840    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:32.222661    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:32.488450    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:32.643857    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:32.723224    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:32.989511    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:33.141026    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:33.223455    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:33.487656    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:33.641083    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:33.723186    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:33.989238    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:34.141737    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:34.242753    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:34.490083    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:34.641497    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:34.722347    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:34.987936    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:35.140867    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:35.223203    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:35.495123    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:35.641642    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:35.722502    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:35.988101    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:36.141606    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:36.222966    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:36.488414    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:36.640843    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:36.723164    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:36.989296    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:37.140275    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:37.223451    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:37.488436    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:37.640654    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:37.723000    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:37.989110    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:38.144785    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:38.222869    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:38.488566    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:38.641041    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:38.740905    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:38.989088    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:39.142077    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:39.223320    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:39.487849    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:39.640966    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:39.726219    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:39.994244    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:40.142247    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:40.223916    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:40.488727    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:40.641506    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:40.723554    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:40.990745    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:41.140952    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:41.223376    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:41.489750    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:41.640925    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:41.722637    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:41.988038    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:42.144983    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:42.224280    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:42.488876    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:42.641008    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:42.723420    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:42.988842    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:43.141251    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:43.223202    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:43.488573    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:43.641037    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:43.723134    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:43.988817    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:44.140892    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:44.223101    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:44.488906    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:44.641303    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:44.723928    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:44.989811    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:45.142353    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:45.228800    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:45.488264    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:45.642105    5489 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 18:51:45.724163    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:45.988136    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:46.142608    5489 kapi.go:107] duration metric: took 1m25.505262125s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 18:51:46.222583    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:46.487957    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:46.722802    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:46.988731    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:47.223171    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:47.489434    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:47.722582    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:47.988436    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:48.224081    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:48.488881    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:48.723653    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 18:51:48.992852    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:49.223077    5489 kapi.go:107] duration metric: took 1m25.003485392s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 18:51:49.226325    5489 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-391119 cluster.
	I1202 18:51:49.229067    5489 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 18:51:49.231892    5489 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 18:51:49.488596    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:49.989229    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:50.487388    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:50.987773    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:51.489671    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:51.990150    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:52.489011    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:52.988275    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:53.488822    5489 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 18:51:53.990165    5489 kapi.go:107] duration metric: took 1m33.005714318s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 18:51:53.993313    5489 out.go:179] * Enabled addons: nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, storage-provisioner, registry-creds, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1202 18:51:53.996264    5489 addons.go:530] duration metric: took 1m39.511594272s for enable addons: enabled=[nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin storage-provisioner registry-creds cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1202 18:51:53.996319    5489 start.go:247] waiting for cluster config update ...
	I1202 18:51:53.996345    5489 start.go:256] writing updated cluster config ...
	I1202 18:51:53.996623    5489 ssh_runner.go:195] Run: rm -f paused
	I1202 18:51:54.001197    5489 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 18:51:54.004648    5489 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khwqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.013478    5489 pod_ready.go:94] pod "coredns-66bc5c9577-khwqf" is "Ready"
	I1202 18:51:54.013520    5489 pod_ready.go:86] duration metric: took 8.843388ms for pod "coredns-66bc5c9577-khwqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.035334    5489 pod_ready.go:83] waiting for pod "etcd-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.043889    5489 pod_ready.go:94] pod "etcd-addons-391119" is "Ready"
	I1202 18:51:54.043929    5489 pod_ready.go:86] duration metric: took 8.565675ms for pod "etcd-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.047583    5489 pod_ready.go:83] waiting for pod "kube-apiserver-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.061131    5489 pod_ready.go:94] pod "kube-apiserver-addons-391119" is "Ready"
	I1202 18:51:54.061193    5489 pod_ready.go:86] duration metric: took 13.582442ms for pod "kube-apiserver-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.089065    5489 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.404757    5489 pod_ready.go:94] pod "kube-controller-manager-addons-391119" is "Ready"
	I1202 18:51:54.404834    5489 pod_ready.go:86] duration metric: took 315.742493ms for pod "kube-controller-manager-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:54.605966    5489 pod_ready.go:83] waiting for pod "kube-proxy-z4z6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:55.004956    5489 pod_ready.go:94] pod "kube-proxy-z4z6m" is "Ready"
	I1202 18:51:55.005029    5489 pod_ready.go:86] duration metric: took 399.030605ms for pod "kube-proxy-z4z6m" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:55.205452    5489 pod_ready.go:83] waiting for pod "kube-scheduler-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:55.605702    5489 pod_ready.go:94] pod "kube-scheduler-addons-391119" is "Ready"
	I1202 18:51:55.605728    5489 pod_ready.go:86] duration metric: took 400.248478ms for pod "kube-scheduler-addons-391119" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:51:55.605741    5489 pod_ready.go:40] duration metric: took 1.604512353s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 18:51:55.994032    5489 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 18:51:55.999587    5489 out.go:179] * Done! kubectl is now configured to use "addons-391119" cluster and "default" namespace by default
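The gcp-auth messages above (18:51:49) describe the addon's opt-out: a pod skips credential mounting when its metadata carries the `gcp-auth-skip-secret` label key. Below is a minimal sketch of such a pod spec built with the Kubernetes Go API types; the pod name, container image, and the label value "true" are illustrative assumptions (per the message, only the key needs to be present) and are not taken from this test run.

	// Sketch: a pod that asks the minikube gcp-auth webhook not to mount GCP credentials.
	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical name
				Labels: map[string]string{
					// The addon message only asks for this key to exist;
					// the value "true" is an assumption for readability.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "busybox", Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
				},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out)) // can be piped to `kubectl apply -f -`
	}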
	
	
	==> CRI-O <==
	Dec 02 18:51:53 addons-391119 crio[831]: time="2025-12-02T18:51:53.688881197Z" level=info msg="Created container 08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5: kube-system/csi-hostpathplugin-gdz4d/csi-snapshotter" id=2fefe35a-2420-4e70-b8d2-bf54906c7f63 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 18:51:53 addons-391119 crio[831]: time="2025-12-02T18:51:53.689975043Z" level=info msg="Starting container: 08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5" id=dcd80d71-642c-4805-acf6-2ab599219c58 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 18:51:53 addons-391119 crio[831]: time="2025-12-02T18:51:53.693194117Z" level=info msg="Started container" PID=4806 containerID=08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5 description=kube-system/csi-hostpathplugin-gdz4d/csi-snapshotter id=dcd80d71-642c-4805-acf6-2ab599219c58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=604f66f3a387c58e1ebda93b15c80b2257b8168c4254f1e6cf6a5b27f9f2ba41
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.481250829Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4cd988f3-3c3f-416f-88e4-131f9cd03e39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.48133904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.494157402Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8231900e582f7abe2b41b67da919e7366d3932daae515fe5c17893650fd92735 UID:9e10da12-ac5a-4af7-9fd2-88eea49a93f1 NetNS:/var/run/netns/3ba0f112-86c5-4d8e-b48e-e2f750f743b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000128f38}] Aliases:map[]}"
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.494210833Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.509071766Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8231900e582f7abe2b41b67da919e7366d3932daae515fe5c17893650fd92735 UID:9e10da12-ac5a-4af7-9fd2-88eea49a93f1 NetNS:/var/run/netns/3ba0f112-86c5-4d8e-b48e-e2f750f743b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000128f38}] Aliases:map[]}"
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.509214237Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.512119882Z" level=info msg="Ran pod sandbox 8231900e582f7abe2b41b67da919e7366d3932daae515fe5c17893650fd92735 with infra container: default/busybox/POD" id=4cd988f3-3c3f-416f-88e4-131f9cd03e39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.516800106Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fca0f059-1fca-4184-ae67-dc3de85cfae2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.517072676Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=fca0f059-1fca-4184-ae67-dc3de85cfae2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.517132022Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=fca0f059-1fca-4184-ae67-dc3de85cfae2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.518066963Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b56bd83d-8e33-48b8-9b83-374b3fceaa87 name=/runtime.v1.ImageService/PullImage
	Dec 02 18:51:57 addons-391119 crio[831]: time="2025-12-02T18:51:57.519819102Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.436570539Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b56bd83d-8e33-48b8-9b83-374b3fceaa87 name=/runtime.v1.ImageService/PullImage
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.4374907Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=200fc48e-56a6-4297-ad92-84f8f886b39a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.439250483Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c99ac572-5a6f-4e05-9f0c-130a40b959c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.44465777Z" level=info msg="Creating container: default/busybox/busybox" id=bdc84716-d43e-45fd-a0df-56d3f4ba0822 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.444779636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.451475184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.451976973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.471266553Z" level=info msg="Created container da3aa953a1e449ee00217ec803e1a2dd9467fcb5a6e918b5e13834d80c552981: default/busybox/busybox" id=bdc84716-d43e-45fd-a0df-56d3f4ba0822 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.472134302Z" level=info msg="Starting container: da3aa953a1e449ee00217ec803e1a2dd9467fcb5a6e918b5e13834d80c552981" id=7194853c-1688-4cca-aae3-4d9c8b1e15d6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 18:51:59 addons-391119 crio[831]: time="2025-12-02T18:51:59.478127051Z" level=info msg="Started container" PID=4903 containerID=da3aa953a1e449ee00217ec803e1a2dd9467fcb5a6e918b5e13834d80c552981 description=default/busybox/busybox id=7194853c-1688-4cca-aae3-4d9c8b1e15d6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8231900e582f7abe2b41b67da919e7366d3932daae515fe5c17893650fd92735
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	da3aa953a1e44       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   8231900e582f7       busybox                                    default
	08bf95d396b25       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 seconds ago       Running             csi-snapshotter                          0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	e9ec8143d3c3b       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	4936dfcaa8f43       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            17 seconds ago       Running             liveness-probe                           0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	667b638fd852e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           18 seconds ago       Running             hostpath                                 0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	7bf7410b1e128       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 20 seconds ago       Running             gcp-auth                                 0                   09fca3aeab6fe       gcp-auth-78565c9fb4-846nn                  gcp-auth
	16415d39f3c1d       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             23 seconds ago       Running             controller                               0                   cabac100f1519       ingress-nginx-controller-6c8bf45fb-wxm2s   ingress-nginx
	ece6af85dd624       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            30 seconds ago       Running             gadget                                   0                   84ca873e22231       gadget-htrz9                               gadget
	99c70d815e876       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                33 seconds ago       Running             node-driver-registrar                    0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	e58c1dd3e1586       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               34 seconds ago       Running             minikube-ingress-dns                     0                   321e6a28021f4       kube-ingress-dns-minikube                  kube-system
	4a404d74a7a80       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              42 seconds ago       Running             registry-proxy                           0                   334c1255b47d2       registry-proxy-8cmtn                       kube-system
	68e3c4d497333       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   45 seconds ago       Exited              patch                                    0                   71a0f8a89c4f6       ingress-nginx-admission-patch-fd76k        ingress-nginx
	f82a27f25fd27       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   46 seconds ago       Exited              patch                                    0                   1fd92c1dc5ef2       gcp-auth-certs-patch-v98v6                 gcp-auth
	ede9106262c1d       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              46 seconds ago       Running             csi-resizer                              0                   469ec2121a19a       csi-hostpath-resizer-0                     kube-system
	f6a6e48ddc1aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   47 seconds ago       Exited              create                                   0                   5946b2ba34848       gcp-auth-certs-create-ppkrm                gcp-auth
	86afc0e10ae10       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     48 seconds ago       Running             nvidia-device-plugin-ctr                 0                   370e8a64064c8       nvidia-device-plugin-daemonset-jhzdp       kube-system
	56226ccdee89d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   51 seconds ago       Exited              create                                   0                   0ffcdd2163447       ingress-nginx-admission-create-hbhz6       ingress-nginx
	eebd27b0fd019       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   54 seconds ago       Running             csi-external-health-monitor-controller   0                   604f66f3a387c       csi-hostpathplugin-gdz4d                   kube-system
	efde85fa7a639       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      55 seconds ago       Running             volume-snapshot-controller               0                   5b4e9f5d11f3a       snapshot-controller-7d9fbc56b8-8x8cp       kube-system
	e46cf04322b8a       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             55 seconds ago       Running             csi-attacher                             0                   5e86465690254       csi-hostpath-attacher-0                    kube-system
	19488e1fccb37       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             57 seconds ago       Running             local-path-provisioner                   0                   587c230687612       local-path-provisioner-648f6765c9-dqshj    local-path-storage
	f9dabed489849       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              58 seconds ago       Running             yakd                                     0                   44b6efeca2b22       yakd-dashboard-5ff678cb9-9rt6b             yakd-dashboard
	c996f159fc220       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   498e2ed1789e6       metrics-server-85b7d694d7-8qm5c            kube-system
	ec8ebe2000d71       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   e298b111743b1       registry-6b586f9694-sb27k                  kube-system
	3d93f44570f5d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   360ea997f74a2       snapshot-controller-7d9fbc56b8-gmcbm       kube-system
	ded683f7ffbb6       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   1576a4d6b3588       cloud-spanner-emulator-5bdddb765-xl6n8     default
	a49e8bf8b4a18       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   bf26e6271b773       storage-provisioner                        kube-system
	1c15eef657852       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   306193e1cd27c       coredns-66bc5c9577-khwqf                   kube-system
	560101125bfd0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   5aaec64c2a130       kindnet-zszgk                              kube-system
	35dda71f1d492       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             About a minute ago   Running             kube-proxy                               0                   11499283350da       kube-proxy-z4z6m                           kube-system
	8e8e87c9645a2       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   8b237cd443d89       kube-apiserver-addons-391119               kube-system
	7c076f35e9904       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   2d297d0d7bbf8       kube-scheduler-addons-391119               kube-system
	0ebf58658f3b8       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   41af2a338e6d1       etcd-addons-391119                         kube-system
	c2d0298aacf21       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   272e4608fdeb5       kube-controller-manager-addons-391119      kube-system
	
	
	==> coredns [1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa] <==
	[INFO] 10.244.0.18:50764 - 45673 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000057689s
	[INFO] 10.244.0.18:50764 - 22801 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002057241s
	[INFO] 10.244.0.18:50764 - 39341 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002093508s
	[INFO] 10.244.0.18:50764 - 7680 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000150562s
	[INFO] 10.244.0.18:50764 - 35129 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000088047s
	[INFO] 10.244.0.18:40704 - 2959 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172042s
	[INFO] 10.244.0.18:40704 - 4214 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000221904s
	[INFO] 10.244.0.18:52872 - 55205 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000153178s
	[INFO] 10.244.0.18:52872 - 55394 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000280307s
	[INFO] 10.244.0.18:42262 - 42970 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095555s
	[INFO] 10.244.0.18:42262 - 42517 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111391s
	[INFO] 10.244.0.18:43682 - 10228 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001343065s
	[INFO] 10.244.0.18:43682 - 10416 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001756095s
	[INFO] 10.244.0.18:56430 - 47188 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108732s
	[INFO] 10.244.0.18:56430 - 47583 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000262296s
	[INFO] 10.244.0.21:48666 - 54747 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000278034s
	[INFO] 10.244.0.21:34649 - 14954 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000224226s
	[INFO] 10.244.0.21:46258 - 64870 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00023554s
	[INFO] 10.244.0.21:60819 - 921 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00036184s
	[INFO] 10.244.0.21:60574 - 13583 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013562s
	[INFO] 10.244.0.21:59120 - 20787 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090435s
	[INFO] 10.244.0.21:50478 - 58800 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003074979s
	[INFO] 10.244.0.21:59277 - 64765 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003331376s
	[INFO] 10.244.0.21:33542 - 23755 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000608432s
	[INFO] 10.244.0.21:49064 - 64383 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0046589s
	
	
	==> describe nodes <==
	Name:               addons-391119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-391119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=addons-391119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T18_50_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-391119
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-391119"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 18:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-391119
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 18:52:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 18:51:41 +0000   Tue, 02 Dec 2025 18:50:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 18:51:41 +0000   Tue, 02 Dec 2025 18:50:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 18:51:41 +0000   Tue, 02 Dec 2025 18:50:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 18:51:41 +0000   Tue, 02 Dec 2025 18:50:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-391119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                f89a01d6-7158-41c3-94b9-c90bb28284d1
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-xl6n8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  gadget                      gadget-htrz9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  gcp-auth                    gcp-auth-78565c9fb4-846nn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-wxm2s    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         108s
	  kube-system                 coredns-66bc5c9577-khwqf                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 csi-hostpathplugin-gdz4d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 etcd-addons-391119                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-zszgk                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-addons-391119                250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-addons-391119       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-z4z6m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-addons-391119                100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 metrics-server-85b7d694d7-8qm5c             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         110s
	  kube-system                 nvidia-device-plugin-daemonset-jhzdp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 registry-6b586f9694-sb27k                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 registry-creds-764b6fb674-nvw8r             0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 registry-proxy-8cmtn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 snapshot-controller-7d9fbc56b8-8x8cp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 snapshot-controller-7d9fbc56b8-gmcbm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  local-path-storage          local-path-provisioner-648f6765c9-dqshj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9rt6b              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 113s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node addons-391119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node addons-391119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node addons-391119 status is now: NodeHasSufficientPID
	  Normal   Starting                 119s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  119s                 kubelet          Node addons-391119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s                 kubelet          Node addons-391119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s                 kubelet          Node addons-391119 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     115s                 cidrAllocator    Node addons-391119 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           115s                 node-controller  Node addons-391119 event: Registered Node addons-391119 in Controller
	  Normal   NodeReady                73s                  kubelet          Node addons-391119 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c] <==
	{"level":"warn","ts":"2025-12-02T18:50:04.594444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.601739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.650721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.721009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.754509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.786604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.822251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.862181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.888045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.916527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.949310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:04.992315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.025865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.068088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.133749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.194299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.231304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.253845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:05.334227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:21.253398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:21.269587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:43.293029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:43.307847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:43.344292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:50:43.360028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54092","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [7bf7410b1e12852eba266f9783b75c5daa87d1cb4461c783399415a158482592] <==
	2025/12/02 18:51:48 GCP Auth Webhook started!
	2025/12/02 18:51:56 Ready to marshal response ...
	2025/12/02 18:51:56 Ready to write response ...
	2025/12/02 18:51:57 Ready to marshal response ...
	2025/12/02 18:51:57 Ready to write response ...
	2025/12/02 18:51:57 Ready to marshal response ...
	2025/12/02 18:51:57 Ready to write response ...
	
	
	==> kernel <==
	 18:52:09 up 34 min,  0 user,  load average: 2.47, 1.21, 0.48
	Linux addons-391119 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026] <==
	I1202 18:50:15.252694       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 18:50:45.242624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 18:50:45.248646       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 18:50:45.249112       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 18:50:45.249786       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1202 18:50:46.543695       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 18:50:46.543729       1 metrics.go:72] Registering metrics
	I1202 18:50:46.543799       1 controller.go:711] "Syncing nftables rules"
	E1202 18:50:46.544197       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1202 18:50:55.248170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:50:55.248212       1 main.go:301] handling current node
	I1202 18:51:05.241723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:51:05.241789       1 main.go:301] handling current node
	I1202 18:51:15.241739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:51:15.241767       1 main.go:301] handling current node
	I1202 18:51:25.242115       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:51:25.242150       1 main.go:301] handling current node
	I1202 18:51:35.242570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:51:35.242599       1 main.go:301] handling current node
	I1202 18:51:45.243877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:51:45.243916       1 main.go:301] handling current node
	I1202 18:51:55.241778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:51:55.241826       1 main.go:301] handling current node
	I1202 18:52:05.245756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:52:05.245788       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b] <==
	E1202 18:51:18.703313       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1202 18:51:18.704441       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.73.76:443: connect: connection refused" logger="UnhandledError"
	E1202 18:51:18.705238       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.73.76:443: connect: connection refused" logger="UnhandledError"
	E1202 18:51:18.710593       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.73.76:443: connect: connection refused" logger="UnhandledError"
	W1202 18:51:19.699674       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 18:51:19.699730       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 18:51:19.699743       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 18:51:19.699841       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 18:51:19.699915       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 18:51:19.700990       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 18:51:23.745303       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 18:51:23.745395       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.73.76:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1202 18:51:23.745543       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 18:51:23.801297       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 18:52:06.525983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54906: use of closed network connection
	E1202 18:52:06.752384       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54938: use of closed network connection
	
	
	==> kube-controller-manager [c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc] <==
	I1202 18:50:13.323540       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-391119"
	I1202 18:50:13.323603       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1202 18:50:13.323369       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 18:50:13.324090       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 18:50:13.325407       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 18:50:13.325480       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 18:50:13.325757       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 18:50:13.325852       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 18:50:13.326169       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 18:50:13.326696       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 18:50:13.326862       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 18:50:13.328283       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 18:50:13.329498       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 18:50:13.330484       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	E1202 18:50:18.904759       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1202 18:50:43.286143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 18:50:43.286288       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1202 18:50:43.286338       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 18:50:43.327591       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1202 18:50:43.333515       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 18:50:43.387303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 18:50:43.434705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 18:50:58.333807       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1202 18:51:13.392282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 18:51:13.442812       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072] <==
	I1202 18:50:15.336168       1 server_linux.go:53] "Using iptables proxy"
	I1202 18:50:15.476483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 18:50:15.576607       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 18:50:15.576689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 18:50:15.576776       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 18:50:15.642400       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 18:50:15.642452       1 server_linux.go:132] "Using iptables Proxier"
	I1202 18:50:15.649225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 18:50:15.659590       1 server.go:527] "Version info" version="v1.34.2"
	I1202 18:50:15.659614       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 18:50:15.661987       1 config.go:200] "Starting service config controller"
	I1202 18:50:15.662000       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 18:50:15.662022       1 config.go:106] "Starting endpoint slice config controller"
	I1202 18:50:15.662026       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 18:50:15.662069       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 18:50:15.662075       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 18:50:15.666845       1 config.go:309] "Starting node config controller"
	I1202 18:50:15.666865       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 18:50:15.666873       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 18:50:15.762102       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 18:50:15.762160       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 18:50:15.762174       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753] <==
	I1202 18:50:07.522625       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 18:50:07.524933       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 18:50:07.525094       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:50:07.525115       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:50:07.525133       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 18:50:07.531617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 18:50:07.532396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 18:50:07.532542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 18:50:07.532644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 18:50:07.532689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 18:50:07.532932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1202 18:50:07.533005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 18:50:07.533074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 18:50:07.533122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 18:50:07.534612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 18:50:07.536591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 18:50:07.536689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 18:50:07.536747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 18:50:07.536787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 18:50:07.537610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 18:50:07.542342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 18:50:07.542580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 18:50:07.543680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 18:50:07.543694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1202 18:50:08.925739       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 18:51:24 addons-391119 kubelet[1273]: I1202 18:51:24.742649    1273 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd92c1dc5ef26b9797be44fd5171a88cff4f42ef0020e01283c0c970090783c"
	Dec 02 18:51:25 addons-391119 kubelet[1273]: I1202 18:51:25.005768    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nq5t\" (UniqueName: \"kubernetes.io/projected/b6a6ef90-31bd-4e7d-9831-ab91b3c37d50-kube-api-access-8nq5t\") pod \"b6a6ef90-31bd-4e7d-9831-ab91b3c37d50\" (UID: \"b6a6ef90-31bd-4e7d-9831-ab91b3c37d50\") "
	Dec 02 18:51:25 addons-391119 kubelet[1273]: I1202 18:51:25.008009    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a6ef90-31bd-4e7d-9831-ab91b3c37d50-kube-api-access-8nq5t" (OuterVolumeSpecName: "kube-api-access-8nq5t") pod "b6a6ef90-31bd-4e7d-9831-ab91b3c37d50" (UID: "b6a6ef90-31bd-4e7d-9831-ab91b3c37d50"). InnerVolumeSpecName "kube-api-access-8nq5t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 02 18:51:25 addons-391119 kubelet[1273]: I1202 18:51:25.106668    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nq5t\" (UniqueName: \"kubernetes.io/projected/b6a6ef90-31bd-4e7d-9831-ab91b3c37d50-kube-api-access-8nq5t\") on node \"addons-391119\" DevicePath \"\""
	Dec 02 18:51:25 addons-391119 kubelet[1273]: I1202 18:51:25.764026    1273 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a0f8a89c4f662c26986be466bd1ef0b968df5e3bdf844f37cba44c00e69b0d"
	Dec 02 18:51:26 addons-391119 kubelet[1273]: I1202 18:51:26.771347    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8cmtn" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 18:51:26 addons-391119 kubelet[1273]: I1202 18:51:26.796397    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-8cmtn" podStartSLOduration=2.262739934 podStartE2EDuration="31.796375568s" podCreationTimestamp="2025-12-02 18:50:55 +0000 UTC" firstStartedPulling="2025-12-02 18:50:56.748330711 +0000 UTC m=+47.668581726" lastFinishedPulling="2025-12-02 18:51:26.281966337 +0000 UTC m=+77.202217360" observedRunningTime="2025-12-02 18:51:26.79595357 +0000 UTC m=+77.716204593" watchObservedRunningTime="2025-12-02 18:51:26.796375568 +0000 UTC m=+77.716626583"
	Dec 02 18:51:27 addons-391119 kubelet[1273]: E1202 18:51:27.535219    1273 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 02 18:51:27 addons-391119 kubelet[1273]: E1202 18:51:27.535308    1273 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0ede8c-f96f-4372-bdb4-19fc781427c9-gcr-creds podName:2b0ede8c-f96f-4372-bdb4-19fc781427c9 nodeName:}" failed. No retries permitted until 2025-12-02 18:51:59.53529154 +0000 UTC m=+110.455542554 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/2b0ede8c-f96f-4372-bdb4-19fc781427c9-gcr-creds") pod "registry-creds-764b6fb674-nvw8r" (UID: "2b0ede8c-f96f-4372-bdb4-19fc781427c9") : secret "registry-creds-gcr" not found
	Dec 02 18:51:27 addons-391119 kubelet[1273]: I1202 18:51:27.778460    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8cmtn" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 18:51:38 addons-391119 kubelet[1273]: I1202 18:51:38.902481    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-ingress-dns-minikube" podStartSLOduration=42.547382316 podStartE2EDuration="1m19.902454924s" podCreationTimestamp="2025-12-02 18:50:19 +0000 UTC" firstStartedPulling="2025-12-02 18:50:56.756818617 +0000 UTC m=+47.677069632" lastFinishedPulling="2025-12-02 18:51:34.111891225 +0000 UTC m=+85.032142240" observedRunningTime="2025-12-02 18:51:34.857197647 +0000 UTC m=+85.777448670" watchObservedRunningTime="2025-12-02 18:51:38.902454924 +0000 UTC m=+89.822705947"
	Dec 02 18:51:44 addons-391119 kubelet[1273]: I1202 18:51:44.533576    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-htrz9" podStartSLOduration=71.333169378 podStartE2EDuration="1m25.533558496s" podCreationTimestamp="2025-12-02 18:50:19 +0000 UTC" firstStartedPulling="2025-12-02 18:51:24.277984822 +0000 UTC m=+75.198235837" lastFinishedPulling="2025-12-02 18:51:38.47837394 +0000 UTC m=+89.398624955" observedRunningTime="2025-12-02 18:51:38.903048554 +0000 UTC m=+89.823299585" watchObservedRunningTime="2025-12-02 18:51:44.533558496 +0000 UTC m=+95.453809511"
	Dec 02 18:51:48 addons-391119 kubelet[1273]: I1202 18:51:48.954885    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-846nn" podStartSLOduration=64.597454803 podStartE2EDuration="1m24.954868416s" podCreationTimestamp="2025-12-02 18:50:24 +0000 UTC" firstStartedPulling="2025-12-02 18:51:28.120802662 +0000 UTC m=+79.041053677" lastFinishedPulling="2025-12-02 18:51:48.478216275 +0000 UTC m=+99.398467290" observedRunningTime="2025-12-02 18:51:48.954084291 +0000 UTC m=+99.874335314" watchObservedRunningTime="2025-12-02 18:51:48.954868416 +0000 UTC m=+99.875119431"
	Dec 02 18:51:48 addons-391119 kubelet[1273]: I1202 18:51:48.956272    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-wxm2s" podStartSLOduration=71.148811652 podStartE2EDuration="1m28.956258765s" podCreationTimestamp="2025-12-02 18:50:20 +0000 UTC" firstStartedPulling="2025-12-02 18:51:27.767058647 +0000 UTC m=+78.687309662" lastFinishedPulling="2025-12-02 18:51:45.57450576 +0000 UTC m=+96.494756775" observedRunningTime="2025-12-02 18:51:45.932996871 +0000 UTC m=+96.853247894" watchObservedRunningTime="2025-12-02 18:51:48.956258765 +0000 UTC m=+99.876509780"
	Dec 02 18:51:51 addons-391119 kubelet[1273]: I1202 18:51:51.401968    1273 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 02 18:51:51 addons-391119 kubelet[1273]: I1202 18:51:51.402025    1273 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 02 18:51:53 addons-391119 kubelet[1273]: I1202 18:51:53.243450    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad89e08-5cce-4010-9190-cecc5fec723e" path="/var/lib/kubelet/pods/2ad89e08-5cce-4010-9190-cecc5fec723e/volumes"
	Dec 02 18:51:53 addons-391119 kubelet[1273]: I1202 18:51:53.972250    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-gdz4d" podStartSLOduration=1.764186824 podStartE2EDuration="58.972231513s" podCreationTimestamp="2025-12-02 18:50:55 +0000 UTC" firstStartedPulling="2025-12-02 18:50:56.443336153 +0000 UTC m=+47.363587168" lastFinishedPulling="2025-12-02 18:51:53.651380842 +0000 UTC m=+104.571631857" observedRunningTime="2025-12-02 18:51:53.971257755 +0000 UTC m=+104.891508786" watchObservedRunningTime="2025-12-02 18:51:53.972231513 +0000 UTC m=+104.892482536"
	Dec 02 18:51:55 addons-391119 kubelet[1273]: I1202 18:51:55.243219    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b027c77-7044-4d9b-bf9d-44b84304335b" path="/var/lib/kubelet/pods/3b027c77-7044-4d9b-bf9d-44b84304335b/volumes"
	Dec 02 18:51:57 addons-391119 kubelet[1273]: I1202 18:51:57.203740    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltgl6\" (UniqueName: \"kubernetes.io/projected/9e10da12-ac5a-4af7-9fd2-88eea49a93f1-kube-api-access-ltgl6\") pod \"busybox\" (UID: \"9e10da12-ac5a-4af7-9fd2-88eea49a93f1\") " pod="default/busybox"
	Dec 02 18:51:57 addons-391119 kubelet[1273]: I1202 18:51:57.203860    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9e10da12-ac5a-4af7-9fd2-88eea49a93f1-gcp-creds\") pod \"busybox\" (UID: \"9e10da12-ac5a-4af7-9fd2-88eea49a93f1\") " pod="default/busybox"
	Dec 02 18:51:59 addons-391119 kubelet[1273]: E1202 18:51:59.617954    1273 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 02 18:51:59 addons-391119 kubelet[1273]: E1202 18:51:59.618047    1273 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0ede8c-f96f-4372-bdb4-19fc781427c9-gcr-creds podName:2b0ede8c-f96f-4372-bdb4-19fc781427c9 nodeName:}" failed. No retries permitted until 2025-12-02 18:53:03.618027298 +0000 UTC m=+174.538278313 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/2b0ede8c-f96f-4372-bdb4-19fc781427c9-gcr-creds") pod "registry-creds-764b6fb674-nvw8r" (UID: "2b0ede8c-f96f-4372-bdb4-19fc781427c9") : secret "registry-creds-gcr" not found
	Dec 02 18:52:09 addons-391119 kubelet[1273]: I1202 18:52:09.207105    1273 scope.go:117] "RemoveContainer" containerID="f6a6e48ddc1aaa338045ec4284b99ebc23481dd4c36451609e92c03ef99aaec6"
	Dec 02 18:52:09 addons-391119 kubelet[1273]: I1202 18:52:09.218999    1273 scope.go:117] "RemoveContainer" containerID="f82a27f25fd2755d8b0e799296f9779d2c8df34f878f8f7e0d5edcaae1046970"
	
	
	==> storage-provisioner [a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17] <==
	W1202 18:51:44.992547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:46.996369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:47.001512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:49.010725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:49.024064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:51.027637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:51.040033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:53.048139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:53.067466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:55.070203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:55.075032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:57.078330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:57.086543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:59.089783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:51:59.094307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:01.097012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:01.103634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:03.106675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:03.111055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:05.113846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:05.120620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:07.127732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:07.138226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:09.141489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:52:09.146400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-391119 -n addons-391119
helpers_test.go:269: (dbg) Run:  kubectl --context addons-391119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-hbhz6 ingress-nginx-admission-patch-fd76k registry-creds-764b6fb674-nvw8r
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-391119 describe pod ingress-nginx-admission-create-hbhz6 ingress-nginx-admission-patch-fd76k registry-creds-764b6fb674-nvw8r
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-391119 describe pod ingress-nginx-admission-create-hbhz6 ingress-nginx-admission-patch-fd76k registry-creds-764b6fb674-nvw8r: exit status 1 (86.240566ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hbhz6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fd76k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-nvw8r" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-391119 describe pod ingress-nginx-admission-create-hbhz6 ingress-nginx-admission-patch-fd76k registry-creds-764b6fb674-nvw8r: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable headlamp --alsologtostderr -v=1: exit status 11 (319.655926ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:52:10.244115   12056 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:52:10.244328   12056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:10.244342   12056 out.go:374] Setting ErrFile to fd 2...
	I1202 18:52:10.244347   12056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:10.244680   12056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:52:10.245028   12056 mustload.go:66] Loading cluster: addons-391119
	I1202 18:52:10.245479   12056 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:10.245502   12056 addons.go:622] checking whether the cluster is paused
	I1202 18:52:10.245649   12056 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:10.245705   12056 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:52:10.246244   12056 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:52:10.265261   12056 ssh_runner.go:195] Run: systemctl --version
	I1202 18:52:10.265349   12056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:52:10.283410   12056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:52:10.392328   12056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:52:10.392413   12056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:52:10.428241   12056 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:52:10.428263   12056 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:52:10.428268   12056 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:52:10.428273   12056 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:52:10.428276   12056 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:52:10.428282   12056 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:52:10.428285   12056 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:52:10.428288   12056 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:52:10.428292   12056 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:52:10.428298   12056 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:52:10.428302   12056 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:52:10.428305   12056 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:52:10.428308   12056 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:52:10.428311   12056 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:52:10.428314   12056 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:52:10.428320   12056 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:52:10.428324   12056 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:52:10.428331   12056 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:52:10.428334   12056 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:52:10.428338   12056 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:52:10.428343   12056 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:52:10.428346   12056 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:52:10.428349   12056 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:52:10.428353   12056 cri.go:89] found id: ""
	I1202 18:52:10.428408   12056 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:52:10.452515   12056 out.go:203] 
	W1202 18:52:10.455545   12056 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:52:10.455574   12056 out.go:285] * 
	* 
	W1202 18:52:10.508413   12056 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:52:10.511466   12056 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.38s)
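Note: every failing `addons disable` call in this report exits the same way. Before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; the second command fails because /run/runc is missing. The commands below are a manual reproduction sketch only: the crictl and runc invocations are taken verbatim from the stderr above, while wrapping them in `minikube ssh` is an assumption about how to reach the node, and the missing /run/runc directory most likely means crio on this image keeps its runtime state elsewhere (e.g. a non-runc OCI runtime).

    # Illustrative reproduction of the paused-state check (not part of the test run).
    # Step 1: list kube-system container IDs; this step succeeds in the log above.
    out/minikube-linux-arm64 -p addons-391119 ssh 'sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"'
    # Step 2: the runc listing minikube runs next; on this node it fails with
    #   open /run/runc: no such file or directory
    out/minikube-linux-arm64 -p addons-391119 ssh 'sudo runc list -f json'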

                                                
                                    
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-xl6n8" [8d08e0d9-1301-4379-8ce1-874dd41ee5e0] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002864765s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (242.349346ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:52:28.775957   12529 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:52:28.776106   12529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:28.776112   12529 out.go:374] Setting ErrFile to fd 2...
	I1202 18:52:28.776116   12529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:28.776531   12529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:52:28.776877   12529 mustload.go:66] Loading cluster: addons-391119
	I1202 18:52:28.777978   12529 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:28.778023   12529 addons.go:622] checking whether the cluster is paused
	I1202 18:52:28.778180   12529 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:28.778212   12529 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:52:28.778807   12529 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:52:28.795171   12529 ssh_runner.go:195] Run: systemctl --version
	I1202 18:52:28.795224   12529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:52:28.812474   12529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:52:28.916016   12529 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:52:28.916098   12529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:52:28.944497   12529 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:52:28.944521   12529 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:52:28.944526   12529 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:52:28.944529   12529 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:52:28.944533   12529 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:52:28.944537   12529 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:52:28.944540   12529 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:52:28.944543   12529 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:52:28.944546   12529 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:52:28.944577   12529 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:52:28.944581   12529 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:52:28.944584   12529 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:52:28.944588   12529 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:52:28.944595   12529 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:52:28.944598   12529 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:52:28.944606   12529 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:52:28.944613   12529 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:52:28.944618   12529 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:52:28.944621   12529 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:52:28.944623   12529 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:52:28.944647   12529 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:52:28.944652   12529 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:52:28.944655   12529 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:52:28.944659   12529 cri.go:89] found id: ""
	I1202 18:52:28.944711   12529 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:52:28.960557   12529 out.go:203] 
	W1202 18:52:28.963482   12529 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:52:28.963506   12529 out.go:285] * 
	* 
	W1202 18:52:28.968221   12529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:52:28.971249   12529 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)
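Note: the 6m0s readiness wait in this test can be rechecked by hand with the same context, namespace, and label the harness uses. This is a minimal sketch; the `-l` selector form of the kubectl call is an assumption, while the label, namespace, and context come from the log above.

    # Sketch: check the emulator pod the same way the wait step does.
    kubectl --context addons-391119 get pods -n default -l app=cloud-spanner-emulator
    # The log shows cloud-spanner-emulator-5bdddb765-xl6n8 Running within ~5s,
    # so the FAIL comes solely from the later `addons disable cloud-spanner` step.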

                                                
                                    
TestAddons/parallel/LocalPath (8.41s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-391119 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-391119 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-391119 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [73f55e0f-a35c-475a-b5c5-51fe83723ada] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [73f55e0f-a35c-475a-b5c5-51fe83723ada] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [73f55e0f-a35c-475a-b5c5-51fe83723ada] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003888014s
addons_test.go:967: (dbg) Run:  kubectl --context addons-391119 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 ssh "cat /opt/local-path-provisioner/pvc-d9b26da9-ba59-4d1e-8d9e-2c2373daa6ce_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-391119 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-391119 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (263.454338ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 18:52:30.236755   12684 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:52:30.236983   12684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:30.236997   12684 out.go:374] Setting ErrFile to fd 2...
	I1202 18:52:30.237004   12684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:30.237308   12684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:52:30.237623   12684 mustload.go:66] Loading cluster: addons-391119
	I1202 18:52:30.238049   12684 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:30.238071   12684 addons.go:622] checking whether the cluster is paused
	I1202 18:52:30.238215   12684 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:30.238232   12684 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:52:30.238783   12684 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:52:30.258626   12684 ssh_runner.go:195] Run: systemctl --version
	I1202 18:52:30.258685   12684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:52:30.280638   12684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:52:30.388104   12684 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:52:30.388212   12684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:52:30.421164   12684 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:52:30.421189   12684 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:52:30.421194   12684 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:52:30.421198   12684 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:52:30.421201   12684 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:52:30.421206   12684 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:52:30.421209   12684 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:52:30.421212   12684 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:52:30.421215   12684 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:52:30.421223   12684 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:52:30.421226   12684 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:52:30.421229   12684 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:52:30.421233   12684 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:52:30.421236   12684 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:52:30.421239   12684 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:52:30.421251   12684 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:52:30.421254   12684 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:52:30.421258   12684 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:52:30.421261   12684 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:52:30.421264   12684 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:52:30.421274   12684 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:52:30.421278   12684 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:52:30.421282   12684 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:52:30.421285   12684 cri.go:89] found id: ""
	I1202 18:52:30.421340   12684 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:52:30.436776   12684 out.go:203] 
	W1202 18:52:30.439724   12684 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:52:30.439783   12684 out.go:285] * 
	* 
	W1202 18:52:30.446165   12684 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:52:30.449795   12684 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.41s)
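Note: the LocalPath failure above, and the NvidiaDevicePlugin and Yakd failures below, all abort at the same point. Before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers and then running `sudo runc list -f json` on the node; on this crio-based node /run/runc does not exist, so the command exits with status 1 and the disable is reported as MK_ADDON_DISABLE_PAUSED. The Go sketch below only mirrors that failing step for illustration; the function name and error wrapping are assumptions, not the actual minikube source.

	// paused_check_sketch.go - illustrative only; names are hypothetical.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// checkPausedViaRunc mirrors the failing step seen in the logs: it shells out
	// to `sudo runc list -f json` and parses the result. On the node in this
	// report /run/runc is missing, so the command exits 1 and the whole
	// addon-disable flow errors out before anything is disabled.
	func checkPausedViaRunc() ([]map[string]any, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the path taken in the report: exit status 1,
			// stderr "open /run/runc: no such file or directory".
			return nil, fmt.Errorf("list paused: runc: %w", err)
		}
		var containers []map[string]any
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		return containers, nil
	}

	func main() {
		if _, err := checkPausedViaRunc(); err != nil {
			fmt.Println("X Exiting due to MK_ADDON_DISABLE_PAUSED:", err)
		}
	}

Run on a node where runc has never created /run/runc (as in this report), the sketch reproduces the "open /run/runc: no such file or directory" exit path; on a node with runc-managed state it would instead return the parsed container list.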

TestAddons/parallel/NvidiaDevicePlugin (6.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-jhzdp" [920f250b-9211-45ac-9e8d-f89c768a2102] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003812156s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (269.373002ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 18:52:21.839475   12234 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:52:21.839685   12234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:21.839698   12234 out.go:374] Setting ErrFile to fd 2...
	I1202 18:52:21.839705   12234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:21.840005   12234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:52:21.840322   12234 mustload.go:66] Loading cluster: addons-391119
	I1202 18:52:21.840757   12234 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:21.840778   12234 addons.go:622] checking whether the cluster is paused
	I1202 18:52:21.840929   12234 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:21.840943   12234 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:52:21.841582   12234 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:52:21.859425   12234 ssh_runner.go:195] Run: systemctl --version
	I1202 18:52:21.859488   12234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:52:21.877843   12234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:52:21.984050   12234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:52:21.984136   12234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:52:22.011990   12234 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:52:22.012017   12234 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:52:22.012021   12234 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:52:22.012025   12234 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:52:22.012028   12234 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:52:22.012032   12234 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:52:22.012035   12234 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:52:22.012038   12234 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:52:22.012041   12234 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:52:22.012047   12234 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:52:22.012050   12234 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:52:22.012054   12234 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:52:22.012057   12234 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:52:22.012059   12234 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:52:22.012062   12234 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:52:22.012068   12234 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:52:22.012072   12234 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:52:22.012075   12234 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:52:22.012078   12234 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:52:22.012085   12234 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:52:22.012090   12234 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:52:22.012113   12234 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:52:22.012118   12234 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:52:22.012121   12234 cri.go:89] found id: ""
	I1202 18:52:22.012196   12234 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:52:22.030413   12234 out.go:203] 
	W1202 18:52:22.033414   12234 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:52:22.033450   12234 out.go:285] * 
	* 
	W1202 18:52:22.038451   12234 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:52:22.041806   12234 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9rt6b" [b71562c7-b386-4e5e-a736-2171b84859a5] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003809601s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-391119 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-391119 addons disable yakd --alsologtostderr -v=1: exit status 11 (252.470311ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1202 18:52:15.561855   12120 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:52:15.562086   12120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:15.562104   12120 out.go:374] Setting ErrFile to fd 2...
	I1202 18:52:15.562111   12120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:52:15.562578   12120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:52:15.563027   12120 mustload.go:66] Loading cluster: addons-391119
	I1202 18:52:15.563909   12120 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:15.563930   12120 addons.go:622] checking whether the cluster is paused
	I1202 18:52:15.564085   12120 config.go:182] Loaded profile config "addons-391119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:52:15.564097   12120 host.go:66] Checking if "addons-391119" exists ...
	I1202 18:52:15.565564   12120 cli_runner.go:164] Run: docker container inspect addons-391119 --format={{.State.Status}}
	I1202 18:52:15.584342   12120 ssh_runner.go:195] Run: systemctl --version
	I1202 18:52:15.584391   12120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-391119
	I1202 18:52:15.602266   12120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/addons-391119/id_rsa Username:docker}
	I1202 18:52:15.716573   12120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:52:15.716675   12120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:52:15.743675   12120 cri.go:89] found id: "08bf95d396b25f3eb3d689ab72cea8ca7a62a336e965dc06505e9b58ea5090a5"
	I1202 18:52:15.743696   12120 cri.go:89] found id: "e9ec8143d3c3b40d5b96a1b741fe8181d2a8120d159e22c7312e783660664048"
	I1202 18:52:15.743701   12120 cri.go:89] found id: "4936dfcaa8f43a34c1238d3198008e289460e652f53b8af1839db330924e4f7c"
	I1202 18:52:15.743706   12120 cri.go:89] found id: "667b638fd852e3a6bd295b3ec2fe4a81f9ff03c26a491b1ee7e0fbe59c8521fc"
	I1202 18:52:15.743709   12120 cri.go:89] found id: "99c70d815e876f72cf4d9cd10fd4e32d85136ac40e0d5cc95602f18ae1016634"
	I1202 18:52:15.743714   12120 cri.go:89] found id: "e58c1dd3e1586cbb4f05edcf443678361459280ab623e6ec5d891fdd4cddf914"
	I1202 18:52:15.743717   12120 cri.go:89] found id: "4a404d74a7a809e4126928db8141e8cfc3db7feccab9ab0bb8ccdb6d7090c7f4"
	I1202 18:52:15.743721   12120 cri.go:89] found id: "ede9106262c1d54a1c1f17761a62826e15ce64013602d032c62114bad1be601a"
	I1202 18:52:15.743724   12120 cri.go:89] found id: "86afc0e10ae107e53454d1b963ba0fd331156e8cce9604c67a58771e522bc513"
	I1202 18:52:15.743730   12120 cri.go:89] found id: "eebd27b0fd01962916c6e684b08c3028b20d4bbad139ad7a7905d3e5b5f34f22"
	I1202 18:52:15.743734   12120 cri.go:89] found id: "efde85fa7a639093eacf69a500487d7ac7007a329771c0129876a46b916e4f09"
	I1202 18:52:15.743737   12120 cri.go:89] found id: "e46cf04322b8a87c2488b1befb04008c511a9377168cb1c004c0a1bc2a4af71b"
	I1202 18:52:15.743740   12120 cri.go:89] found id: "c996f159fc220a9dc6cdaabd1cb32a14e1de6a113a7540839a04148323aaa24a"
	I1202 18:52:15.743743   12120 cri.go:89] found id: "ec8ebe2000d713884b36239b7a04ef6eb6ecc877cc79eb563b3bb3959830a297"
	I1202 18:52:15.743747   12120 cri.go:89] found id: "3d93f44570f5dcf0234b286ee2a9faa337b2fdec235f362567abfd103577d4d9"
	I1202 18:52:15.743754   12120 cri.go:89] found id: "a49e8bf8b4a188da6a4a8036fa649b0f570e021e5d16416cc4c962517a1fcb17"
	I1202 18:52:15.743761   12120 cri.go:89] found id: "1c15eef657852be0bbf42fb61164dfea116515c4f3f42787f18ccb5f5fb52baa"
	I1202 18:52:15.743766   12120 cri.go:89] found id: "560101125bfd05d90a803b723d8edd4f14fa136ecf37e52ca7b661fa8d4fa026"
	I1202 18:52:15.743769   12120 cri.go:89] found id: "35dda71f1d492c8ef6c94e32daacc14f94ac23092e91151a51ee0952c6fb6072"
	I1202 18:52:15.743772   12120 cri.go:89] found id: "8e8e87c9645a2de1f8b1007f277c3a98f18e56d56fd39b39e070209d0d165a4b"
	I1202 18:52:15.743782   12120 cri.go:89] found id: "7c076f35e9904ac7d1b7727ac8366972f91bb6a3ac844c2268e8e3880010d753"
	I1202 18:52:15.743786   12120 cri.go:89] found id: "0ebf58658f3b8744072dcdfc939e712346b8a155783b6fbab120c30330b9f55c"
	I1202 18:52:15.743789   12120 cri.go:89] found id: "c2d0298aacf2100f99e946cda08d8c46afa088c386f61ff4e9f4a0aea5e657bc"
	I1202 18:52:15.743792   12120 cri.go:89] found id: ""
	I1202 18:52:15.743841   12120 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 18:52:15.758352   12120 out.go:203] 
	W1202 18:52:15.761266   12120 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:52:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 18:52:15.761300   12120 out.go:285] * 
	* 
	W1202 18:52:15.766085   12120 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 18:52:15.768984   12120 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-391119 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

TestFunctional/parallel/ServiceCmdConnect (603.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-535807 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-535807 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-khx4m" [1ea6fcf3-2a41-4f05-abe7-d12fd65649a6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-535807 -n functional-535807
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-02 19:08:55.714526041 +0000 UTC m=+1255.548826798
functional_test.go:1645: (dbg) Run:  kubectl --context functional-535807 describe po hello-node-connect-7d85dfc575-khx4m -n default
functional_test.go:1645: (dbg) kubectl --context functional-535807 describe po hello-node-connect-7d85dfc575-khx4m -n default:
Name:             hello-node-connect-7d85dfc575-khx4m
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-535807/192.168.49.2
Start Time:       Tue, 02 Dec 2025 18:58:55 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fct8z (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-fct8z:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-khx4m to functional-535807
Normal   Pulling    7m9s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m45s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m32s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-535807 logs hello-node-connect-7d85dfc575-khx4m -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-535807 logs hello-node-connect-7d85dfc575-khx4m -n default: exit status 1 (105.287344ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-khx4m" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-535807 logs hello-node-connect-7d85dfc575-khx4m -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
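The kubelet events above identify the root cause: the deployment was created with the short image name kicbase/echo-server, and the node's registry configuration enforces short-name mode, so an unqualified name that can resolve against more than one candidate registry is rejected ("returns ambiguous list") rather than pulled. A fully qualified reference sidesteps the ambiguity. The sketch below illustrates the usual heuristic for deciding whether a reference already names a registry and, if not, pinning it to one; the helper name and the docker.io default are assumptions for illustration only, not part of the test code.

	// qualify_image_sketch.go - illustrative helper, not minikube/test code.
	package main

	import (
		"fmt"
		"strings"
	)

	// qualifyImage returns ref unchanged if its first path component already
	// looks like a registry host (contains '.' or ':', or is "localhost");
	// otherwise it pins the reference to defaultRegistry so short-name
	// resolution never has to choose between candidate registries.
	func qualifyImage(ref, defaultRegistry string) string {
		i := strings.Index(ref, "/")
		if i < 0 {
			// No registry and no namespace, e.g. "ubuntu:latest".
			return defaultRegistry + "/" + ref
		}
		first := ref[:i]
		if strings.ContainsAny(first, ".:") || first == "localhost" {
			return ref // already qualified, e.g. registry.k8s.io/pause:3.1
		}
		return defaultRegistry + "/" + ref
	}

	func main() {
		// "kicbase/echo-server" is what the test deployed; qualifying it
		// removes the ambiguity that an enforcing short-name policy rejects.
		fmt.Println(qualifyImage("kicbase/echo-server", "docker.io"))
		fmt.Println(qualifyImage("registry.k8s.io/pause:3.1", "docker.io"))
	}

With this applied, kicbase/echo-server becomes docker.io/kicbase/echo-server, a single unambiguous candidate under the enforcing short-name policy.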
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-535807 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-khx4m
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-535807/192.168.49.2
Start Time:       Tue, 02 Dec 2025 18:58:55 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fct8z (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-fct8z:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-khx4m to functional-535807
Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m46s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m33s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-535807 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-535807 logs -l app=hello-node-connect: exit status 1 (87.796038ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-khx4m" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-535807 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-535807 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.203.158
IPs:                      10.108.203.158
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31766/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
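In the service description above, Endpoints is empty even though the selector and NodePort are correct: the only pod matching app=hello-node-connect never became Ready (it is stuck in ImagePullBackOff), so nothing is published behind the service and the connectivity test has nowhere to route. A minimal client-go sketch of the same readiness check, assuming a standard kubeconfig and the default namespace, is shown below; it is illustrative, not part of the test harness.

	// endpoints_check_sketch.go - illustrative, assumes kubeconfig at ~/.kube/config.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List the pods the service selector targets and report their readiness.
		pods, err := cs.CoreV1().Pods("default").List(context.Background(),
			metav1.ListOptions{LabelSelector: "app=hello-node-connect"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			// A pod that is not Ready is not added to the service Endpoints,
			// which is why the NodePort has nothing to forward to.
			fmt.Printf("%s ready=%v phase=%s\n", p.Name, ready, p.Status.Phase)
		}
	}

For the pod in this report the sketch would print ready=false with phase Pending, which matches the empty Endpoints field above.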
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-535807
helpers_test.go:243: (dbg) docker inspect functional-535807:

-- stdout --
	[
	    {
	        "Id": "1824ba8e2fa40b1314ed2640f64b6015adb6dfceb5ac7019deeac481ce680756",
	        "Created": "2025-12-02T18:56:16.721320324Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20223,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T18:56:16.800292825Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/1824ba8e2fa40b1314ed2640f64b6015adb6dfceb5ac7019deeac481ce680756/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1824ba8e2fa40b1314ed2640f64b6015adb6dfceb5ac7019deeac481ce680756/hostname",
	        "HostsPath": "/var/lib/docker/containers/1824ba8e2fa40b1314ed2640f64b6015adb6dfceb5ac7019deeac481ce680756/hosts",
	        "LogPath": "/var/lib/docker/containers/1824ba8e2fa40b1314ed2640f64b6015adb6dfceb5ac7019deeac481ce680756/1824ba8e2fa40b1314ed2640f64b6015adb6dfceb5ac7019deeac481ce680756-json.log",
	        "Name": "/functional-535807",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-535807:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-535807",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1824ba8e2fa40b1314ed2640f64b6015adb6dfceb5ac7019deeac481ce680756",
	                "LowerDir": "/var/lib/docker/overlay2/61542b8af3b9df2e8b26b7391c93576432f7ac30379914821c3685d36a633515-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61542b8af3b9df2e8b26b7391c93576432f7ac30379914821c3685d36a633515/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61542b8af3b9df2e8b26b7391c93576432f7ac30379914821c3685d36a633515/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61542b8af3b9df2e8b26b7391c93576432f7ac30379914821c3685d36a633515/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-535807",
	                "Source": "/var/lib/docker/volumes/functional-535807/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-535807",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-535807",
	                "name.minikube.sigs.k8s.io": "functional-535807",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98f7f2f92fb2df1836eb043f0616d7efa1d7915c9cdf1d7c69ba3cf800055646",
	            "SandboxKey": "/var/run/docker/netns/98f7f2f92fb2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-535807": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:8a:8a:a0:59:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12764b3e5d998b0f1965627006278b4b24cb80f1a6c70165a36dc2b0b54e6dc7",
	                    "EndpointID": "69e356df7216bf1af5c88622f07a63b125f5e2a34b7847da0692f0713700bb58",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-535807",
	                        "1824ba8e2fa4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-535807 -n functional-535807
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-535807 logs -n 25: (1.437397467s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-535807 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ kubectl │ functional-535807 kubectl -- --context functional-535807 get pods                                                          │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ start   │ -p functional-535807 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ service │ invalid-svc -p functional-535807                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │                     │
	│ config  │ functional-535807 config unset cpus                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ cp      │ functional-535807 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ config  │ functional-535807 config get cpus                                                                                          │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │                     │
	│ config  │ functional-535807 config set cpus 2                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ config  │ functional-535807 config get cpus                                                                                          │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ config  │ functional-535807 config unset cpus                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ ssh     │ functional-535807 ssh -n functional-535807 sudo cat /home/docker/cp-test.txt                                               │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ config  │ functional-535807 config get cpus                                                                                          │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │                     │
	│ ssh     │ functional-535807 ssh echo hello                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ cp      │ functional-535807 cp functional-535807:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2566009513/001/cp-test.txt │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ ssh     │ functional-535807 ssh cat /etc/hostname                                                                                    │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ ssh     │ functional-535807 ssh -n functional-535807 sudo cat /home/docker/cp-test.txt                                               │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ tunnel  │ functional-535807 tunnel --alsologtostderr                                                                                 │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │                     │
	│ tunnel  │ functional-535807 tunnel --alsologtostderr                                                                                 │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │                     │
	│ cp      │ functional-535807 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ tunnel  │ functional-535807 tunnel --alsologtostderr                                                                                 │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │                     │
	│ ssh     │ functional-535807 ssh -n functional-535807 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ addons  │ functional-535807 addons list                                                                                              │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	│ addons  │ functional-535807 addons list -o json                                                                                      │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 18:58:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 18:58:03.622070   24396 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:58:03.622191   24396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:58:03.622195   24396 out.go:374] Setting ErrFile to fd 2...
	I1202 18:58:03.622199   24396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:58:03.622553   24396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:58:03.622998   24396 out.go:368] Setting JSON to false
	I1202 18:58:03.624304   24396 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2422,"bootTime":1764699462,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 18:58:03.624365   24396 start.go:143] virtualization:  
	I1202 18:58:03.627802   24396 out.go:179] * [functional-535807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 18:58:03.630849   24396 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 18:58:03.630972   24396 notify.go:221] Checking for updates...
	I1202 18:58:03.636843   24396 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 18:58:03.639787   24396 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:58:03.642536   24396 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 18:58:03.645442   24396 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 18:58:03.648309   24396 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 18:58:03.652455   24396 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:58:03.652545   24396 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 18:58:03.687264   24396 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 18:58:03.687389   24396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:58:03.749272   24396 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-02 18:58:03.740238209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:58:03.749362   24396 docker.go:319] overlay module found
	I1202 18:58:03.752311   24396 out.go:179] * Using the docker driver based on existing profile
	I1202 18:58:03.755107   24396 start.go:309] selected driver: docker
	I1202 18:58:03.755116   24396 start.go:927] validating driver "docker" against &{Name:functional-535807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-535807 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 18:58:03.755217   24396 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 18:58:03.755321   24396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:58:03.808100   24396 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-02 18:58:03.798411284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:58:03.808494   24396 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 18:58:03.808520   24396 cni.go:84] Creating CNI manager for ""
	I1202 18:58:03.808575   24396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:58:03.808613   24396 start.go:353] cluster config:
	{Name:functional-535807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-535807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 18:58:03.811691   24396 out.go:179] * Starting "functional-535807" primary control-plane node in "functional-535807" cluster
	I1202 18:58:03.814493   24396 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 18:58:03.817291   24396 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 18:58:03.820224   24396 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:58:03.820274   24396 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 18:58:03.820282   24396 cache.go:65] Caching tarball of preloaded images
	I1202 18:58:03.820344   24396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 18:58:03.820375   24396 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 18:58:03.820384   24396 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 18:58:03.820494   24396 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/config.json ...
	I1202 18:58:03.838823   24396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 18:58:03.838833   24396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 18:58:03.838845   24396 cache.go:243] Successfully downloaded all kic artifacts
	I1202 18:58:03.838874   24396 start.go:360] acquireMachinesLock for functional-535807: {Name:mk121649ecc31850ef7753a0622a13633affc9d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 18:58:03.838921   24396 start.go:364] duration metric: took 32.647µs to acquireMachinesLock for "functional-535807"
	I1202 18:58:03.838938   24396 start.go:96] Skipping create...Using existing machine configuration
	I1202 18:58:03.838944   24396 fix.go:54] fixHost starting: 
	I1202 18:58:03.839193   24396 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
	I1202 18:58:03.855686   24396 fix.go:112] recreateIfNeeded on functional-535807: state=Running err=<nil>
	W1202 18:58:03.855704   24396 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 18:58:03.858919   24396 out.go:252] * Updating the running docker "functional-535807" container ...
	I1202 18:58:03.858949   24396 machine.go:94] provisionDockerMachine start ...
	I1202 18:58:03.859029   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:03.879984   24396 main.go:143] libmachine: Using SSH client type: native
	I1202 18:58:03.880292   24396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1202 18:58:03.880298   24396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 18:58:04.029285   24396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-535807
	
	I1202 18:58:04.029299   24396 ubuntu.go:182] provisioning hostname "functional-535807"
	I1202 18:58:04.029360   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:04.048076   24396 main.go:143] libmachine: Using SSH client type: native
	I1202 18:58:04.048363   24396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1202 18:58:04.048371   24396 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-535807 && echo "functional-535807" | sudo tee /etc/hostname
	I1202 18:58:04.210757   24396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-535807
	
	I1202 18:58:04.210830   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:04.229272   24396 main.go:143] libmachine: Using SSH client type: native
	I1202 18:58:04.229570   24396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1202 18:58:04.229603   24396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-535807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-535807/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-535807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 18:58:04.377927   24396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 18:58:04.377942   24396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 18:58:04.377963   24396 ubuntu.go:190] setting up certificates
	I1202 18:58:04.377974   24396 provision.go:84] configureAuth start
	I1202 18:58:04.378037   24396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-535807
	I1202 18:58:04.395705   24396 provision.go:143] copyHostCerts
	I1202 18:58:04.395764   24396 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 18:58:04.395776   24396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 18:58:04.395850   24396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 18:58:04.395946   24396 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 18:58:04.395954   24396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 18:58:04.395980   24396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 18:58:04.396027   24396 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 18:58:04.396065   24396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 18:58:04.396088   24396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 18:58:04.396129   24396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-535807 san=[127.0.0.1 192.168.49.2 functional-535807 localhost minikube]
	I1202 18:58:04.627782   24396 provision.go:177] copyRemoteCerts
	I1202 18:58:04.627832   24396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 18:58:04.627874   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:04.644992   24396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
	I1202 18:58:04.749317   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 18:58:04.766285   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 18:58:04.783145   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 18:58:04.799786   24396 provision.go:87] duration metric: took 421.789016ms to configureAuth
	I1202 18:58:04.799801   24396 ubuntu.go:206] setting minikube options for container-runtime
	I1202 18:58:04.800008   24396 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:58:04.800108   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:04.819298   24396 main.go:143] libmachine: Using SSH client type: native
	I1202 18:58:04.819603   24396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1202 18:58:04.819614   24396 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 18:58:10.243437   24396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 18:58:10.243449   24396 machine.go:97] duration metric: took 6.384494197s to provisionDockerMachine
	I1202 18:58:10.243459   24396 start.go:293] postStartSetup for "functional-535807" (driver="docker")
	I1202 18:58:10.243469   24396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 18:58:10.243524   24396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 18:58:10.243576   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:10.264212   24396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
	I1202 18:58:10.369591   24396 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 18:58:10.373183   24396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 18:58:10.373201   24396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 18:58:10.373211   24396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 18:58:10.373263   24396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 18:58:10.373334   24396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 18:58:10.373404   24396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 18:58:10.373446   24396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 18:58:10.380934   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 18:58:10.398172   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 18:58:10.416221   24396 start.go:296] duration metric: took 172.748257ms for postStartSetup
	I1202 18:58:10.416305   24396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 18:58:10.416342   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:10.435093   24396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
	I1202 18:58:10.534690   24396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 18:58:10.539254   24396 fix.go:56] duration metric: took 6.700305523s for fixHost
	I1202 18:58:10.539269   24396 start.go:83] releasing machines lock for "functional-535807", held for 6.700340714s
	I1202 18:58:10.539338   24396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-535807
	I1202 18:58:10.556162   24396 ssh_runner.go:195] Run: cat /version.json
	I1202 18:58:10.556204   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:10.556446   24396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 18:58:10.556498   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:10.573314   24396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
	I1202 18:58:10.585775   24396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
	I1202 18:58:10.689298   24396 ssh_runner.go:195] Run: systemctl --version
	I1202 18:58:10.783548   24396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 18:58:10.820428   24396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 18:58:10.824718   24396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 18:58:10.824776   24396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 18:58:10.832387   24396 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 18:58:10.832408   24396 start.go:496] detecting cgroup driver to use...
	I1202 18:58:10.832437   24396 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 18:58:10.832479   24396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 18:58:10.846123   24396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 18:58:10.859123   24396 docker.go:218] disabling cri-docker service (if available) ...
	I1202 18:58:10.859174   24396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 18:58:10.874524   24396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 18:58:10.887800   24396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 18:58:11.025820   24396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 18:58:11.173494   24396 docker.go:234] disabling docker service ...
	I1202 18:58:11.173558   24396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 18:58:11.188717   24396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 18:58:11.201244   24396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 18:58:11.329957   24396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 18:58:11.459376   24396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 18:58:11.472239   24396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 18:58:11.486844   24396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 18:58:11.486906   24396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:58:11.495496   24396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 18:58:11.495551   24396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:58:11.504229   24396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:58:11.512546   24396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:58:11.520844   24396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 18:58:11.528622   24396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:58:11.536844   24396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:58:11.544824   24396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 18:58:11.553162   24396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 18:58:11.560232   24396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 18:58:11.567294   24396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:58:11.701952   24396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 18:58:11.960979   24396 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 18:58:11.961035   24396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 18:58:11.964687   24396 start.go:564] Will wait 60s for crictl version
	I1202 18:58:11.964741   24396 ssh_runner.go:195] Run: which crictl
	I1202 18:58:11.968074   24396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 18:58:11.991497   24396 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 18:58:11.991565   24396 ssh_runner.go:195] Run: crio --version
	I1202 18:58:12.019557   24396 ssh_runner.go:195] Run: crio --version
	I1202 18:58:12.052760   24396 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 18:58:12.055690   24396 cli_runner.go:164] Run: docker network inspect functional-535807 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 18:58:12.071791   24396 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 18:58:12.079237   24396 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 18:58:12.082373   24396 kubeadm.go:884] updating cluster {Name:functional-535807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-535807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 18:58:12.082500   24396 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:58:12.082591   24396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 18:58:12.119012   24396 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 18:58:12.119024   24396 crio.go:433] Images already preloaded, skipping extraction
	I1202 18:58:12.119075   24396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 18:58:12.145260   24396 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 18:58:12.145271   24396 cache_images.go:86] Images are preloaded, skipping loading
	I1202 18:58:12.145278   24396 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.2 crio true true} ...
	I1202 18:58:12.145381   24396 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-535807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:functional-535807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 18:58:12.145466   24396 ssh_runner.go:195] Run: crio config
	I1202 18:58:12.217816   24396 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 18:58:12.217834   24396 cni.go:84] Creating CNI manager for ""
	I1202 18:58:12.217842   24396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:58:12.217851   24396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 18:58:12.217880   24396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-535807 NodeName:functional-535807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 18:58:12.217987   24396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-535807"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 18:58:12.218049   24396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 18:58:12.225507   24396 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 18:58:12.225563   24396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 18:58:12.232624   24396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1202 18:58:12.244823   24396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 18:58:12.257276   24396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1202 18:58:12.270506   24396 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 18:58:12.274131   24396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:58:12.412111   24396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 18:58:12.425231   24396 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807 for IP: 192.168.49.2
	I1202 18:58:12.425242   24396 certs.go:195] generating shared ca certs ...
	I1202 18:58:12.425256   24396 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:58:12.425379   24396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 18:58:12.425419   24396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 18:58:12.425434   24396 certs.go:257] generating profile certs ...
	I1202 18:58:12.425519   24396 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.key
	I1202 18:58:12.425563   24396 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/apiserver.key.4b01236b
	I1202 18:58:12.425616   24396 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/proxy-client.key
	I1202 18:58:12.425821   24396 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 18:58:12.425854   24396 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 18:58:12.425861   24396 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 18:58:12.425887   24396 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 18:58:12.425914   24396 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 18:58:12.425938   24396 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 18:58:12.425985   24396 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 18:58:12.426548   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 18:58:12.443790   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 18:58:12.461019   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 18:58:12.478225   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 18:58:12.494980   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 18:58:12.512546   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 18:58:12.530150   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 18:58:12.548295   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 18:58:12.565202   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 18:58:12.581920   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 18:58:12.598991   24396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 18:58:12.615884   24396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 18:58:12.628444   24396 ssh_runner.go:195] Run: openssl version
	I1202 18:58:12.634296   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 18:58:12.642341   24396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 18:58:12.645783   24396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 18:56 /usr/share/ca-certificates/44702.pem
	I1202 18:58:12.645832   24396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 18:58:12.686228   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 18:58:12.693921   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 18:58:12.701745   24396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:58:12.705148   24396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:58:12.705205   24396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 18:58:12.745616   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 18:58:12.753284   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 18:58:12.761061   24396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 18:58:12.764652   24396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 18:56 /usr/share/ca-certificates/4470.pem
	I1202 18:58:12.764701   24396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 18:58:12.805330   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 18:58:12.813122   24396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 18:58:12.818655   24396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 18:58:12.879267   24396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 18:58:12.923858   24396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 18:58:12.965785   24396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 18:58:13.011728   24396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 18:58:13.054198   24396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 18:58:13.097443   24396 kubeadm.go:401] StartCluster: {Name:functional-535807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-535807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 18:58:13.097531   24396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 18:58:13.097602   24396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:58:13.123730   24396 cri.go:89] found id: "4303e19537c66651d3613dee48467060eab1ef1477f2d0247cfcd1afd92903f4"
	I1202 18:58:13.123741   24396 cri.go:89] found id: "ed204dcf52c674d8e9acbb55f141e5ef7ae6e34ea7b7b0e37a1ba2bdadb5fb7a"
	I1202 18:58:13.123745   24396 cri.go:89] found id: "576b7d7e9ca36d721a2793e22c7dcadeb8c0219972e7e4f5b9f41923811697d7"
	I1202 18:58:13.123747   24396 cri.go:89] found id: "a3c1f287c407fba62b997c3acdf6b49fb1a0f418f5f25ee7f7212714611dfd2c"
	I1202 18:58:13.123750   24396 cri.go:89] found id: "f66a74f7d8b724b7ff6a184f4e56632398c3da0c5c62d2036d0aa6bba47848e2"
	I1202 18:58:13.123754   24396 cri.go:89] found id: "f90385fd7003f8f85521d77202516f23344e847950e0ec7a2f608b4091a5e62f"
	I1202 18:58:13.123756   24396 cri.go:89] found id: "86864e7f3db6e0874d63bb11750362e1a144a6229876340f73ee171f16f536fa"
	I1202 18:58:13.123758   24396 cri.go:89] found id: "2b336a3666a706fb60bdf9e0cf3c4c1bf5f3e9d08e2d5a6f550b911dfb89cb13"
	I1202 18:58:13.123760   24396 cri.go:89] found id: ""
	I1202 18:58:13.123812   24396 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 18:58:13.134507   24396 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:58:13Z" level=error msg="open /run/runc: no such file or directory"
	I1202 18:58:13.134578   24396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 18:58:13.142208   24396 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 18:58:13.142216   24396 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 18:58:13.142264   24396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 18:58:13.149754   24396 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 18:58:13.150259   24396 kubeconfig.go:125] found "functional-535807" server: "https://192.168.49.2:8441"
	I1202 18:58:13.152559   24396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 18:58:13.160214   24396 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 18:56:22.198069325 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 18:58:12.262703847 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 18:58:13.160223   24396 kubeadm.go:1161] stopping kube-system containers ...
	I1202 18:58:13.160234   24396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 18:58:13.160289   24396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 18:58:13.186428   24396 cri.go:89] found id: "4303e19537c66651d3613dee48467060eab1ef1477f2d0247cfcd1afd92903f4"
	I1202 18:58:13.186439   24396 cri.go:89] found id: "ed204dcf52c674d8e9acbb55f141e5ef7ae6e34ea7b7b0e37a1ba2bdadb5fb7a"
	I1202 18:58:13.186442   24396 cri.go:89] found id: "576b7d7e9ca36d721a2793e22c7dcadeb8c0219972e7e4f5b9f41923811697d7"
	I1202 18:58:13.186445   24396 cri.go:89] found id: "a3c1f287c407fba62b997c3acdf6b49fb1a0f418f5f25ee7f7212714611dfd2c"
	I1202 18:58:13.186447   24396 cri.go:89] found id: "f66a74f7d8b724b7ff6a184f4e56632398c3da0c5c62d2036d0aa6bba47848e2"
	I1202 18:58:13.186450   24396 cri.go:89] found id: "f90385fd7003f8f85521d77202516f23344e847950e0ec7a2f608b4091a5e62f"
	I1202 18:58:13.186452   24396 cri.go:89] found id: "86864e7f3db6e0874d63bb11750362e1a144a6229876340f73ee171f16f536fa"
	I1202 18:58:13.186454   24396 cri.go:89] found id: "2b336a3666a706fb60bdf9e0cf3c4c1bf5f3e9d08e2d5a6f550b911dfb89cb13"
	I1202 18:58:13.186456   24396 cri.go:89] found id: ""
	I1202 18:58:13.186460   24396 cri.go:252] Stopping containers: [4303e19537c66651d3613dee48467060eab1ef1477f2d0247cfcd1afd92903f4 ed204dcf52c674d8e9acbb55f141e5ef7ae6e34ea7b7b0e37a1ba2bdadb5fb7a 576b7d7e9ca36d721a2793e22c7dcadeb8c0219972e7e4f5b9f41923811697d7 a3c1f287c407fba62b997c3acdf6b49fb1a0f418f5f25ee7f7212714611dfd2c f66a74f7d8b724b7ff6a184f4e56632398c3da0c5c62d2036d0aa6bba47848e2 f90385fd7003f8f85521d77202516f23344e847950e0ec7a2f608b4091a5e62f 86864e7f3db6e0874d63bb11750362e1a144a6229876340f73ee171f16f536fa 2b336a3666a706fb60bdf9e0cf3c4c1bf5f3e9d08e2d5a6f550b911dfb89cb13]
	I1202 18:58:13.186519   24396 ssh_runner.go:195] Run: which crictl
	I1202 18:58:13.190114   24396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 4303e19537c66651d3613dee48467060eab1ef1477f2d0247cfcd1afd92903f4 ed204dcf52c674d8e9acbb55f141e5ef7ae6e34ea7b7b0e37a1ba2bdadb5fb7a 576b7d7e9ca36d721a2793e22c7dcadeb8c0219972e7e4f5b9f41923811697d7 a3c1f287c407fba62b997c3acdf6b49fb1a0f418f5f25ee7f7212714611dfd2c f66a74f7d8b724b7ff6a184f4e56632398c3da0c5c62d2036d0aa6bba47848e2 f90385fd7003f8f85521d77202516f23344e847950e0ec7a2f608b4091a5e62f 86864e7f3db6e0874d63bb11750362e1a144a6229876340f73ee171f16f536fa 2b336a3666a706fb60bdf9e0cf3c4c1bf5f3e9d08e2d5a6f550b911dfb89cb13
	I1202 18:58:13.261161   24396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 18:58:13.392918   24396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 18:58:13.400786   24396 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 18:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 18:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Dec  2 18:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 18:56 /etc/kubernetes/scheduler.conf
	
	I1202 18:58:13.400843   24396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 18:58:13.408664   24396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 18:58:13.416142   24396 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 18:58:13.416200   24396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 18:58:13.423540   24396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 18:58:13.431012   24396 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 18:58:13.431064   24396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 18:58:13.438847   24396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 18:58:13.446714   24396 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 18:58:13.446764   24396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 18:58:13.453812   24396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 18:58:13.461223   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 18:58:13.509872   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 18:58:16.659539   24396 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.149644115s)
	I1202 18:58:16.659602   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 18:58:16.885961   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 18:58:16.984125   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 18:58:17.064046   24396 api_server.go:52] waiting for apiserver process to appear ...
	I1202 18:58:17.064110   24396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 18:58:17.564434   24396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 18:58:18.064280   24396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 18:58:18.082174   24396 api_server.go:72] duration metric: took 1.01812776s to wait for apiserver process to appear ...
	I1202 18:58:18.082192   24396 api_server.go:88] waiting for apiserver healthz status ...
	I1202 18:58:18.082210   24396 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 18:58:21.233359   24396 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 18:58:21.233376   24396 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 18:58:21.233387   24396 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 18:58:21.322794   24396 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 18:58:21.322816   24396 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 18:58:21.583145   24396 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 18:58:21.591497   24396 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 18:58:21.591512   24396 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 18:58:22.082787   24396 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 18:58:22.091430   24396 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 18:58:22.091447   24396 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 18:58:22.582792   24396 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 18:58:22.591288   24396 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1202 18:58:22.605354   24396 api_server.go:141] control plane version: v1.34.2
	I1202 18:58:22.605369   24396 api_server.go:131] duration metric: took 4.523172544s to wait for apiserver health ...
	I1202 18:58:22.605377   24396 cni.go:84] Creating CNI manager for ""
	I1202 18:58:22.605382   24396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:58:22.609072   24396 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 18:58:22.612077   24396 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 18:58:22.616378   24396 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 18:58:22.616388   24396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 18:58:22.634532   24396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 18:58:23.102477   24396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 18:58:23.106229   24396 system_pods.go:59] 8 kube-system pods found
	I1202 18:58:23.106255   24396 system_pods.go:61] "coredns-66bc5c9577-ttdx4" [53391897-f401-4ad1-b3ef-fb09e01cdce4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:58:23.106265   24396 system_pods.go:61] "etcd-functional-535807" [70a8c828-dab1-47fc-9e38-c2e3937d5212] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 18:58:23.106271   24396 system_pods.go:61] "kindnet-zj6lg" [38215e4d-ca6a-4fe5-924a-7880b08ebd91] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 18:58:23.106277   24396 system_pods.go:61] "kube-apiserver-functional-535807" [bbdc7f87-f144-4612-92e9-e1c431ac676e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 18:58:23.106282   24396 system_pods.go:61] "kube-controller-manager-functional-535807" [e316052b-7d44-4183-9127-f02983157d68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 18:58:23.106291   24396 system_pods.go:61] "kube-proxy-84fv8" [49af23de-6421-4397-9eef-0aae582a73ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 18:58:23.106296   24396 system_pods.go:61] "kube-scheduler-functional-535807" [ed6c1937-6a64-4937-a440-56152f75677e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 18:58:23.106300   24396 system_pods.go:61] "storage-provisioner" [c01d717b-481e-43e8-b93f-dadf345ac947] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 18:58:23.106306   24396 system_pods.go:74] duration metric: took 3.818333ms to wait for pod list to return data ...
	I1202 18:58:23.106312   24396 node_conditions.go:102] verifying NodePressure condition ...
	I1202 18:58:23.109148   24396 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 18:58:23.109169   24396 node_conditions.go:123] node cpu capacity is 2
	I1202 18:58:23.109180   24396 node_conditions.go:105] duration metric: took 2.863818ms to run NodePressure ...
	I1202 18:58:23.109236   24396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 18:58:23.369365   24396 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1202 18:58:23.372670   24396 kubeadm.go:744] kubelet initialised
	I1202 18:58:23.372692   24396 kubeadm.go:745] duration metric: took 3.304978ms waiting for restarted kubelet to initialise ...
	I1202 18:58:23.372717   24396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 18:58:23.382852   24396 ops.go:34] apiserver oom_adj: -16
	I1202 18:58:23.382868   24396 kubeadm.go:602] duration metric: took 10.240641924s to restartPrimaryControlPlane
	I1202 18:58:23.382876   24396 kubeadm.go:403] duration metric: took 10.285443457s to StartCluster
	I1202 18:58:23.382890   24396 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:58:23.382952   24396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:58:23.383542   24396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 18:58:23.383747   24396 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 18:58:23.384076   24396 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 18:58:23.384119   24396 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 18:58:23.384235   24396 addons.go:70] Setting storage-provisioner=true in profile "functional-535807"
	I1202 18:58:23.384255   24396 addons.go:239] Setting addon storage-provisioner=true in "functional-535807"
	W1202 18:58:23.384259   24396 addons.go:248] addon storage-provisioner should already be in state true
	I1202 18:58:23.384259   24396 addons.go:70] Setting default-storageclass=true in profile "functional-535807"
	I1202 18:58:23.384273   24396 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-535807"
	I1202 18:58:23.384279   24396 host.go:66] Checking if "functional-535807" exists ...
	I1202 18:58:23.384607   24396 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
	I1202 18:58:23.384748   24396 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
	I1202 18:58:23.387018   24396 out.go:179] * Verifying Kubernetes components...
	I1202 18:58:23.391494   24396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 18:58:23.431793   24396 addons.go:239] Setting addon default-storageclass=true in "functional-535807"
	W1202 18:58:23.431803   24396 addons.go:248] addon default-storageclass should already be in state true
	I1202 18:58:23.431824   24396 host.go:66] Checking if "functional-535807" exists ...
	I1202 18:58:23.432239   24396 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
	I1202 18:58:23.434732   24396 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 18:58:23.440472   24396 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 18:58:23.440483   24396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 18:58:23.440556   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:23.483952   24396 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 18:58:23.483965   24396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 18:58:23.484063   24396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 18:58:23.488640   24396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
	I1202 18:58:23.517923   24396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
	I1202 18:58:23.641435   24396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 18:58:23.669846   24396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 18:58:23.671403   24396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 18:58:24.480689   24396 node_ready.go:35] waiting up to 6m0s for node "functional-535807" to be "Ready" ...
	I1202 18:58:24.483320   24396 node_ready.go:49] node "functional-535807" is "Ready"
	I1202 18:58:24.483335   24396 node_ready.go:38] duration metric: took 2.629806ms for node "functional-535807" to be "Ready" ...
	I1202 18:58:24.483344   24396 api_server.go:52] waiting for apiserver process to appear ...
	I1202 18:58:24.483400   24396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 18:58:24.491515   24396 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1202 18:58:24.494380   24396 addons.go:530] duration metric: took 1.110247791s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 18:58:24.501377   24396 api_server.go:72] duration metric: took 1.117606754s to wait for apiserver process to appear ...
	I1202 18:58:24.501390   24396 api_server.go:88] waiting for apiserver healthz status ...
	I1202 18:58:24.501420   24396 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1202 18:58:24.510884   24396 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1202 18:58:24.512342   24396 api_server.go:141] control plane version: v1.34.2
	I1202 18:58:24.512354   24396 api_server.go:131] duration metric: took 10.948528ms to wait for apiserver health ...
	I1202 18:58:24.512361   24396 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 18:58:24.516043   24396 system_pods.go:59] 8 kube-system pods found
	I1202 18:58:24.516061   24396 system_pods.go:61] "coredns-66bc5c9577-ttdx4" [53391897-f401-4ad1-b3ef-fb09e01cdce4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:58:24.516067   24396 system_pods.go:61] "etcd-functional-535807" [70a8c828-dab1-47fc-9e38-c2e3937d5212] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 18:58:24.516072   24396 system_pods.go:61] "kindnet-zj6lg" [38215e4d-ca6a-4fe5-924a-7880b08ebd91] Running
	I1202 18:58:24.516077   24396 system_pods.go:61] "kube-apiserver-functional-535807" [bbdc7f87-f144-4612-92e9-e1c431ac676e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 18:58:24.516088   24396 system_pods.go:61] "kube-controller-manager-functional-535807" [e316052b-7d44-4183-9127-f02983157d68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 18:58:24.516091   24396 system_pods.go:61] "kube-proxy-84fv8" [49af23de-6421-4397-9eef-0aae582a73ff] Running
	I1202 18:58:24.516096   24396 system_pods.go:61] "kube-scheduler-functional-535807" [ed6c1937-6a64-4937-a440-56152f75677e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 18:58:24.516099   24396 system_pods.go:61] "storage-provisioner" [c01d717b-481e-43e8-b93f-dadf345ac947] Running
	I1202 18:58:24.516106   24396 system_pods.go:74] duration metric: took 3.739099ms to wait for pod list to return data ...
	I1202 18:58:24.516112   24396 default_sa.go:34] waiting for default service account to be created ...
	I1202 18:58:24.518306   24396 default_sa.go:45] found service account: "default"
	I1202 18:58:24.518318   24396 default_sa.go:55] duration metric: took 2.201855ms for default service account to be created ...
	I1202 18:58:24.518325   24396 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 18:58:24.521365   24396 system_pods.go:86] 8 kube-system pods found
	I1202 18:58:24.521382   24396 system_pods.go:89] "coredns-66bc5c9577-ttdx4" [53391897-f401-4ad1-b3ef-fb09e01cdce4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 18:58:24.521389   24396 system_pods.go:89] "etcd-functional-535807" [70a8c828-dab1-47fc-9e38-c2e3937d5212] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 18:58:24.521395   24396 system_pods.go:89] "kindnet-zj6lg" [38215e4d-ca6a-4fe5-924a-7880b08ebd91] Running
	I1202 18:58:24.521401   24396 system_pods.go:89] "kube-apiserver-functional-535807" [bbdc7f87-f144-4612-92e9-e1c431ac676e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 18:58:24.521406   24396 system_pods.go:89] "kube-controller-manager-functional-535807" [e316052b-7d44-4183-9127-f02983157d68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 18:58:24.521410   24396 system_pods.go:89] "kube-proxy-84fv8" [49af23de-6421-4397-9eef-0aae582a73ff] Running
	I1202 18:58:24.521416   24396 system_pods.go:89] "kube-scheduler-functional-535807" [ed6c1937-6a64-4937-a440-56152f75677e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 18:58:24.521419   24396 system_pods.go:89] "storage-provisioner" [c01d717b-481e-43e8-b93f-dadf345ac947] Running
	I1202 18:58:24.521424   24396 system_pods.go:126] duration metric: took 3.094678ms to wait for k8s-apps to be running ...
	I1202 18:58:24.521430   24396 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 18:58:24.521484   24396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 18:58:24.535141   24396 system_svc.go:56] duration metric: took 13.703385ms WaitForService to wait for kubelet
	I1202 18:58:24.535159   24396 kubeadm.go:587] duration metric: took 1.15139166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 18:58:24.535177   24396 node_conditions.go:102] verifying NodePressure condition ...
	I1202 18:58:24.537779   24396 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 18:58:24.537794   24396 node_conditions.go:123] node cpu capacity is 2
	I1202 18:58:24.537804   24396 node_conditions.go:105] duration metric: took 2.622849ms to run NodePressure ...
	I1202 18:58:24.537815   24396 start.go:242] waiting for startup goroutines ...
	I1202 18:58:24.537821   24396 start.go:247] waiting for cluster config update ...
	I1202 18:58:24.537830   24396 start.go:256] writing updated cluster config ...
	I1202 18:58:24.538112   24396 ssh_runner.go:195] Run: rm -f paused
	I1202 18:58:24.541741   24396 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 18:58:24.545631   24396 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ttdx4" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 18:58:26.552034   24396 pod_ready.go:104] pod "coredns-66bc5c9577-ttdx4" is not "Ready", error: <nil>
	W1202 18:58:29.051177   24396 pod_ready.go:104] pod "coredns-66bc5c9577-ttdx4" is not "Ready", error: <nil>
	W1202 18:58:31.551146   24396 pod_ready.go:104] pod "coredns-66bc5c9577-ttdx4" is not "Ready", error: <nil>
	I1202 18:58:32.050799   24396 pod_ready.go:94] pod "coredns-66bc5c9577-ttdx4" is "Ready"
	I1202 18:58:32.050813   24396 pod_ready.go:86] duration metric: took 7.505169845s for pod "coredns-66bc5c9577-ttdx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:32.053406   24396 pod_ready.go:83] waiting for pod "etcd-functional-535807" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:32.057977   24396 pod_ready.go:94] pod "etcd-functional-535807" is "Ready"
	I1202 18:58:32.057991   24396 pod_ready.go:86] duration metric: took 4.572076ms for pod "etcd-functional-535807" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:32.060475   24396 pod_ready.go:83] waiting for pod "kube-apiserver-functional-535807" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:32.065102   24396 pod_ready.go:94] pod "kube-apiserver-functional-535807" is "Ready"
	I1202 18:58:32.065116   24396 pod_ready.go:86] duration metric: took 4.629338ms for pod "kube-apiserver-functional-535807" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:32.067373   24396 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-535807" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:32.248448   24396 pod_ready.go:94] pod "kube-controller-manager-functional-535807" is "Ready"
	I1202 18:58:32.248463   24396 pod_ready.go:86] duration metric: took 181.077221ms for pod "kube-controller-manager-functional-535807" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:32.449608   24396 pod_ready.go:83] waiting for pod "kube-proxy-84fv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:32.848865   24396 pod_ready.go:94] pod "kube-proxy-84fv8" is "Ready"
	I1202 18:58:32.848879   24396 pod_ready.go:86] duration metric: took 399.253964ms for pod "kube-proxy-84fv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:33.049418   24396 pod_ready.go:83] waiting for pod "kube-scheduler-functional-535807" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 18:58:35.054400   24396 pod_ready.go:104] pod "kube-scheduler-functional-535807" is not "Ready", error: <nil>
	I1202 18:58:36.554627   24396 pod_ready.go:94] pod "kube-scheduler-functional-535807" is "Ready"
	I1202 18:58:36.554640   24396 pod_ready.go:86] duration metric: took 3.505209531s for pod "kube-scheduler-functional-535807" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 18:58:36.554650   24396 pod_ready.go:40] duration metric: took 12.012887105s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 18:58:36.607770   24396 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 18:58:36.610800   24396 out.go:179] * Done! kubectl is now configured to use "functional-535807" cluster and "default" namespace by default
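Note on the healthz progression in the log above: the apiserver first answers 403 (the anonymous probe is not yet authorized), then 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still pending, and finally 200 "ok". Below is a minimal, illustrative poll loop against such an endpoint; it is not minikube's implementation from api_server.go, and the URL, timeout, and insecure TLS setting are assumptions taken only from this log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz keeps probing an apiserver /healthz endpoint until it
// returns 200 or the deadline passes. TLS verification is skipped
// purely for illustration; an anonymous probe like this may see the
// 403 and 500 responses logged above before the bootstrap hooks finish.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // typically the body is just "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.49.2:8441/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}

As in the log, non-200 responses simply trigger another attempt; only the eventual 200 ends the wait.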
	
	
	==> CRI-O <==
	Dec 02 18:59:13 functional-535807 crio[3549]: time="2025-12-02T18:59:13.087953811Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-bb8gv Namespace:default ID:8db23ae4bc30ccabcff0909f1a895e0e27213bcbb7a946ba8943f381e1ab292d UID:3e5925e9-7392-4388-8f13-7598a5d65071 NetNS:/var/run/netns/d1ba7771-40f4-4577-8c01-ba862cd86da3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078558}] Aliases:map[]}"
	Dec 02 18:59:13 functional-535807 crio[3549]: time="2025-12-02T18:59:13.088246225Z" level=info msg="Checking pod default_hello-node-75c85bcc94-bb8gv for CNI network kindnet (type=ptp)"
	Dec 02 18:59:13 functional-535807 crio[3549]: time="2025-12-02T18:59:13.091853975Z" level=info msg="Ran pod sandbox 8db23ae4bc30ccabcff0909f1a895e0e27213bcbb7a946ba8943f381e1ab292d with infra container: default/hello-node-75c85bcc94-bb8gv/POD" id=6b2d73d3-c582-4912-bb09-236f04346d32 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 18:59:13 functional-535807 crio[3549]: time="2025-12-02T18:59:13.093409294Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8cea8dc3-f998-48ae-8126-3d29532463f0 name=/runtime.v1.ImageService/PullImage
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.027488876Z" level=info msg="Stopping pod sandbox: 2ae19a3baf6ecb4e15d312f2b6fead11ae675189689c7ee6f23eef6a8173cb15" id=c1b87794-2fe3-41ef-9077-57a28a35b8dc name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.027545531Z" level=info msg="Stopped pod sandbox (already stopped): 2ae19a3baf6ecb4e15d312f2b6fead11ae675189689c7ee6f23eef6a8173cb15" id=c1b87794-2fe3-41ef-9077-57a28a35b8dc name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.027998343Z" level=info msg="Removing pod sandbox: 2ae19a3baf6ecb4e15d312f2b6fead11ae675189689c7ee6f23eef6a8173cb15" id=45838765-f23a-498e-be20-1869b367cc07 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.031426405Z" level=info msg="Removed pod sandbox: 2ae19a3baf6ecb4e15d312f2b6fead11ae675189689c7ee6f23eef6a8173cb15" id=45838765-f23a-498e-be20-1869b367cc07 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.031997606Z" level=info msg="Stopping pod sandbox: b28b98b892168020be450c73b88df17afb628905cfb643980dbe0fbe29ad2db8" id=3558f4f7-c805-4e7f-95dc-9bf33e58ccd6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.032049321Z" level=info msg="Stopped pod sandbox (already stopped): b28b98b892168020be450c73b88df17afb628905cfb643980dbe0fbe29ad2db8" id=3558f4f7-c805-4e7f-95dc-9bf33e58ccd6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.032375088Z" level=info msg="Removing pod sandbox: b28b98b892168020be450c73b88df17afb628905cfb643980dbe0fbe29ad2db8" id=3ca60ccd-1e57-4a03-83ee-96e94600318e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.035643098Z" level=info msg="Removed pod sandbox: b28b98b892168020be450c73b88df17afb628905cfb643980dbe0fbe29ad2db8" id=3ca60ccd-1e57-4a03-83ee-96e94600318e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.036128746Z" level=info msg="Stopping pod sandbox: 742c209e1e7116f29c4d80229150b0e95960007af49eafed5bccae974a03fc79" id=d556e7c7-2911-480b-8270-50132f089ea6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.036174266Z" level=info msg="Stopped pod sandbox (already stopped): 742c209e1e7116f29c4d80229150b0e95960007af49eafed5bccae974a03fc79" id=d556e7c7-2911-480b-8270-50132f089ea6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.036473934Z" level=info msg="Removing pod sandbox: 742c209e1e7116f29c4d80229150b0e95960007af49eafed5bccae974a03fc79" id=74f751f0-f871-4c59-8f8a-79a6d9fc0a04 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 18:59:17 functional-535807 crio[3549]: time="2025-12-02T18:59:17.039752831Z" level=info msg="Removed pod sandbox: 742c209e1e7116f29c4d80229150b0e95960007af49eafed5bccae974a03fc79" id=74f751f0-f871-4c59-8f8a-79a6d9fc0a04 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 18:59:25 functional-535807 crio[3549]: time="2025-12-02T18:59:25.080861554Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6ca366b5-20ab-492d-a41d-2d637daa2d5c name=/runtime.v1.ImageService/PullImage
	Dec 02 18:59:36 functional-535807 crio[3549]: time="2025-12-02T18:59:36.082877841Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=33e1bff5-178f-407c-9e7b-973517b8ef7e name=/runtime.v1.ImageService/PullImage
	Dec 02 18:59:52 functional-535807 crio[3549]: time="2025-12-02T18:59:52.080827585Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d1ceb9a1-b652-4d82-93d0-c4488263ed66 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:00:17 functional-535807 crio[3549]: time="2025-12-02T19:00:17.08198079Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ad56f007-2956-4da2-bf19-5535c126aa46 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:00:45 functional-535807 crio[3549]: time="2025-12-02T19:00:45.083821828Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=87de7a08-932f-4d66-a2c5-afc1bf2a9f34 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:01:46 functional-535807 crio[3549]: time="2025-12-02T19:01:46.080593462Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=06bb5edc-a2be-4196-a9fe-f77d4315eeba name=/runtime.v1.ImageService/PullImage
	Dec 02 19:02:07 functional-535807 crio[3549]: time="2025-12-02T19:02:07.08193119Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f55cc0bf-3355-49db-b984-4603a4233576 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:04:36 functional-535807 crio[3549]: time="2025-12-02T19:04:36.08151126Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=211a5501-e2f5-414a-8c65-3b463b9789cf name=/runtime.v1.ImageService/PullImage
	Dec 02 19:04:51 functional-535807 crio[3549]: time="2025-12-02T19:04:51.081626833Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=954b630f-005b-40b8-becd-c52fae502f75 name=/runtime.v1.ImageService/PullImage
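Note: the repeated "Pulling image: kicbase/echo-server:latest" requests above, spaced at growing intervals, are consistent with the kubelet re-issuing a pull under back-off while the pull never completes. A generic sketch of that retry pattern follows; pull() is a hypothetical stand-in, not a real CRI or CRI-O call, and the delays in main are chosen only for the demo (kubelet's own back-off grows from seconds toward a few minutes, roughly matching the spacing in these timestamps).

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullWithBackoff retries a hypothetical pull function with capped
// exponential back-off, the general pattern behind the growing gaps
// between the PullImage requests in the CRI-O log above.
func pullWithBackoff(pull func() error, attempts int, initial, max time.Duration) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := pull(); err == nil {
			return nil
		}
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return errors.New("image pull still failing after retries")
}

func main() {
	err := pullWithBackoff(func() error {
		// Simulated failure: the pull never completes.
		return errors.New("pull timed out")
	}, 3, time.Second, 30*time.Second)
	fmt.Println(err)
}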
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f30c5ecfb5829       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   12cd9e37cfa8b       sp-pod                                      default
	2be2377b2789f       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   9399ff5be0167       nginx-svc                                   default
	e2b47172c8517       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   13bad1be54e17       storage-provisioner                         kube-system
	bca7d590498f6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   db28f0959270e       coredns-66bc5c9577-ttdx4                    kube-system
	e1217ec7b60e1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   f987dd2062bdf       kindnet-zj6lg                               kube-system
	2ff1391b155c4       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  10 minutes ago      Running             kube-proxy                2                   baf35e710f1cf       kube-proxy-84fv8                            kube-system
	100394aa2f794       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                  10 minutes ago      Running             kube-apiserver            0                   b9bd9b36fec9d       kube-apiserver-functional-535807            kube-system
	8994563747d67       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  10 minutes ago      Running             kube-controller-manager   2                   3480470b655f7       kube-controller-manager-functional-535807   kube-system
	ffd91bf8d144a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  10 minutes ago      Running             kube-scheduler            2                   e653f74aedefa       kube-scheduler-functional-535807            kube-system
	1801f1854af73       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  10 minutes ago      Running             etcd                      2                   08a9f730a8457       etcd-functional-535807                      kube-system
	4303e19537c66       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   13bad1be54e17       storage-provisioner                         kube-system
	ed204dcf52c67       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                  11 minutes ago      Exited              kube-scheduler            1                   e653f74aedefa       kube-scheduler-functional-535807            kube-system
	576b7d7e9ca36       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   db28f0959270e       coredns-66bc5c9577-ttdx4                    kube-system
	a3c1f287c407f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                  11 minutes ago      Exited              kube-proxy                1                   baf35e710f1cf       kube-proxy-84fv8                            kube-system
	f66a74f7d8b72       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   f987dd2062bdf       kindnet-zj6lg                               kube-system
	f90385fd7003f       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                  11 minutes ago      Exited              kube-controller-manager   1                   3480470b655f7       kube-controller-manager-functional-535807   kube-system
	2b336a3666a70       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                  11 minutes ago      Exited              etcd                      1                   08a9f730a8457       etcd-functional-535807                      kube-system
	
	
	==> coredns [576b7d7e9ca36d721a2793e22c7dcadeb8c0219972e7e4f5b9f41923811697d7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35038 - 40596 "HINFO IN 1631179033094065180.6768672040703838988. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021791337s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bca7d590498f6b21d889fa8388d05ab42bf9d69f45ae457e5c1a56ad5f0f88c7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55553 - 55894 "HINFO IN 4644816588028296590.3887251890432809423. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041565616s
	
	
	==> describe nodes <==
	Name:               functional-535807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-535807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-535807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T18_56_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 18:56:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-535807
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:08:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:08:43 +0000   Tue, 02 Dec 2025 18:56:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:08:43 +0000   Tue, 02 Dec 2025 18:56:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:08:43 +0000   Tue, 02 Dec 2025 18:56:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:08:43 +0000   Tue, 02 Dec 2025 18:57:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-535807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                91cba286-a6f5-40ce-968c-074353bae363
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bb8gv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-khx4m          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-ttdx4                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-535807                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-zj6lg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-535807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-535807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-84fv8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-535807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-535807 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-535807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-535807 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-535807 event: Registered Node functional-535807 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-535807 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-535807 event: Registered Node functional-535807 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-535807 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-535807 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-535807 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-535807 event: Registered Node functional-535807 in Controller
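Note: the "Allocated resources" percentages above can be reproduced from the pod table and the node's allocatable capacity (2 CPUs, 8022304Ki of memory). A small arithmetic check, with the request values copied from the table:

package main

import "fmt"

// Arithmetic check of the "Allocated resources" summary above, using the
// request values from the pod table against the node's allocatable capacity.
func main() {
	// CPU requests (millicores): coredns, etcd, kindnet, apiserver,
	// controller-manager, scheduler.
	cpuRequests := 100 + 100 + 100 + 250 + 200 + 100
	cpuAllocatable := 2 * 1000
	fmt.Printf("cpu requests: %dm (%d%%)\n", cpuRequests, cpuRequests*100/cpuAllocatable)

	// Memory requests (Mi): coredns 70, etcd 100, kindnet 50.
	memRequestsKi := (70 + 100 + 50) * 1024
	memAllocatableKi := 8022304
	fmt.Printf("memory requests: %dMi (%d%%)\n", memRequestsKi/1024, memRequestsKi*100/memAllocatableKi)
}

Integer truncation gives the same 42% CPU and 2% memory figures that the describe output prints.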
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1801f1854af731ca049edc21194910aa9c710df08ba9885dc07e9d67d52bbd65] <==
	{"level":"warn","ts":"2025-12-02T18:58:19.835823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:19.894417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:19.929839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:19.939192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:19.968916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:19.996390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.029166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.069876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.128764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.170504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.250823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.254039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.275788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.308240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.338441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.370599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.402166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.441734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.481534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.498622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.510649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:58:20.607564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38624","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T19:08:18.635712Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1091}
	{"level":"info","ts":"2025-12-02T19:08:18.659892Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1091,"took":"23.829979ms","hash":1574799112,"current-db-size-bytes":3264512,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1392640,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-12-02T19:08:18.659943Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1574799112,"revision":1091,"compact-revision":-1}
	
	
	==> etcd [2b336a3666a706fb60bdf9e0cf3c4c1bf5f3e9d08e2d5a6f550b911dfb89cb13] <==
	{"level":"warn","ts":"2025-12-02T18:57:41.805283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:57:41.826330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:57:41.859166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:57:41.897952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:57:41.900789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:57:41.919359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T18:57:42.028732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49644","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T18:58:05.001180Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T18:58:05.001238Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-535807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T18:58:05.001327Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T18:58:05.147560Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-02T18:58:05.147733Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T18:58:05.147761Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T18:58:05.147770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-02T18:58:05.147735Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T18:58:05.147826Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T18:58:05.147869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-02T18:58:05.147914Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"error","ts":"2025-12-02T18:58:05.147917Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T18:58:05.147963Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T18:58:05.147985Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-02T18:58:05.151934Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T18:58:05.152028Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T18:58:05.152065Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T18:58:05.152075Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-535807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 19:08:57 up 51 min,  0 user,  load average: 0.08, 0.29, 0.41
	Linux functional-535807 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e1217ec7b60e140833e1ad07c6eeec55f35ac0d443ead270d1f26ce74438c050] <==
	I1202 19:06:52.698503       1 main.go:301] handling current node
	I1202 19:07:02.698366       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:07:02.698412       1 main.go:301] handling current node
	I1202 19:07:12.698684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:07:12.698718       1 main.go:301] handling current node
	I1202 19:07:22.697850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:07:22.697879       1 main.go:301] handling current node
	I1202 19:07:32.698202       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:07:32.698236       1 main.go:301] handling current node
	I1202 19:07:42.701769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:07:42.701802       1 main.go:301] handling current node
	I1202 19:07:52.704505       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:07:52.704613       1 main.go:301] handling current node
	I1202 19:08:02.704933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:08:02.704968       1 main.go:301] handling current node
	I1202 19:08:12.705716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:08:12.705748       1 main.go:301] handling current node
	I1202 19:08:22.706054       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:08:22.706090       1 main.go:301] handling current node
	I1202 19:08:32.703244       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:08:32.703289       1 main.go:301] handling current node
	I1202 19:08:42.701180       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:08:42.701319       1 main.go:301] handling current node
	I1202 19:08:52.705020       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:08:52.705053       1 main.go:301] handling current node
	
	
	==> kindnet [f66a74f7d8b724b7ff6a184f4e56632398c3da0c5c62d2036d0aa6bba47848e2] <==
	I1202 18:57:38.779279       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 18:57:38.796662       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 18:57:38.796790       1 main.go:148] setting mtu 1500 for CNI 
	I1202 18:57:38.796803       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 18:57:38.796817       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T18:57:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 18:57:38.931190       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 18:57:39.017811       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 18:57:39.017851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 18:57:39.018531       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 18:57:43.017947       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 18:57:43.017979       1 metrics.go:72] Registering metrics
	I1202 18:57:43.018027       1 controller.go:711] "Syncing nftables rules"
	I1202 18:57:48.931298       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:57:48.931379       1 main.go:301] handling current node
	I1202 18:57:58.930810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 18:57:58.930861       1 main.go:301] handling current node
	
	
	==> kube-apiserver [100394aa2f7940340026c918ab73644d23f2219dfa8ffffb7fb89e62f31a7660] <==
	I1202 18:58:21.373620       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 18:58:21.374990       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 18:58:21.376857       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 18:58:21.376960       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 18:58:21.400810       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 18:58:21.419616       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 18:58:21.431076       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 18:58:21.435666       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 18:58:22.020960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 18:58:22.128753       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 18:58:23.096025       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 18:58:23.243650       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 18:58:23.340161       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 18:58:23.352145       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 18:58:24.720411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 18:58:24.969020       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 18:58:25.020467       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 18:58:39.867699       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.172.163"}
	I1202 18:58:46.587060       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.46.155"}
	I1202 18:58:55.364037       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.203.158"}
	E1202 18:59:04.412012       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47308: use of closed network connection
	E1202 18:59:05.339333       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1202 18:59:12.643646       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47360: use of closed network connection
	I1202 18:59:12.833598       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.153.93"}
	I1202 19:08:21.323967       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8994563747d67e384109c22a43a5f040316e98b1379cac6be6f54708e036cf2a] <==
	I1202 18:58:24.663756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 18:58:24.665997       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 18:58:24.666072       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 18:58:24.668388       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 18:58:24.670543       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 18:58:24.674837       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 18:58:24.674905       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 18:58:24.674930       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 18:58:24.674935       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 18:58:24.674941       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 18:58:24.678342       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 18:58:24.679640       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 18:58:24.684865       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 18:58:24.689175       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 18:58:24.693435       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 18:58:24.693455       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 18:58:24.693638       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 18:58:24.693735       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-535807"
	I1202 18:58:24.693778       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 18:58:24.696714       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 18:58:24.701207       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 18:58:24.722054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 18:58:24.728402       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 18:58:24.728442       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 18:58:24.728451       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [f90385fd7003f8f85521d77202516f23344e847950e0ec7a2f608b4091a5e62f] <==
	I1202 18:57:46.115835       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 18:57:46.115935       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 18:57:46.116221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1202 18:57:46.116288       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 18:57:46.118368       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 18:57:46.120126       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 18:57:46.120237       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 18:57:46.121333       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 18:57:46.123302       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 18:57:46.124576       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 18:57:46.126849       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1202 18:57:46.129191       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 18:57:46.130565       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 18:57:46.130977       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 18:57:46.134201       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 18:57:46.152496       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 18:57:46.157637       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 18:57:46.164223       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 18:57:46.165244       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 18:57:46.165295       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 18:57:46.165299       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 18:57:46.165383       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 18:57:46.165512       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 18:57:46.169046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 18:57:46.179293       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-proxy [2ff1391b155c45c038af60990925d420e26ce928574fb0020746dd009cf902d8] <==
	I1202 18:58:22.528532       1 server_linux.go:53] "Using iptables proxy"
	I1202 18:58:22.627351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 18:58:22.727446       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 18:58:22.727480       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 18:58:22.727566       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 18:58:22.759092       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 18:58:22.759232       1 server_linux.go:132] "Using iptables Proxier"
	I1202 18:58:22.763704       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 18:58:22.764029       1 server.go:527] "Version info" version="v1.34.2"
	I1202 18:58:22.764253       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 18:58:22.770802       1 config.go:200] "Starting service config controller"
	I1202 18:58:22.770828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 18:58:22.770845       1 config.go:106] "Starting endpoint slice config controller"
	I1202 18:58:22.770849       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 18:58:22.770879       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 18:58:22.770884       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 18:58:22.771503       1 config.go:309] "Starting node config controller"
	I1202 18:58:22.771521       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 18:58:22.771527       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 18:58:22.870961       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 18:58:22.870999       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 18:58:22.871040       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a3c1f287c407fba62b997c3acdf6b49fb1a0f418f5f25ee7f7212714611dfd2c] <==
	I1202 18:57:40.217432       1 server_linux.go:53] "Using iptables proxy"
	I1202 18:57:41.537730       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 18:57:43.165273       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 18:57:43.165314       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 18:57:43.165381       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 18:57:43.298016       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 18:57:43.298067       1 server_linux.go:132] "Using iptables Proxier"
	I1202 18:57:43.356397       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 18:57:43.356682       1 server.go:527] "Version info" version="v1.34.2"
	I1202 18:57:43.356698       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 18:57:43.366359       1 config.go:200] "Starting service config controller"
	I1202 18:57:43.375474       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 18:57:43.375597       1 config.go:106] "Starting endpoint slice config controller"
	I1202 18:57:43.375606       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 18:57:43.375642       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 18:57:43.375652       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 18:57:43.386316       1 config.go:309] "Starting node config controller"
	I1202 18:57:43.386337       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 18:57:43.386344       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 18:57:43.475680       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 18:57:43.475744       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 18:57:43.475758       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ed204dcf52c674d8e9acbb55f141e5ef7ae6e34ea7b7b0e37a1ba2bdadb5fb7a] <==
	I1202 18:57:39.912633       1 serving.go:386] Generated self-signed cert in-memory
	W1202 18:57:42.852101       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 18:57:42.852131       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 18:57:42.852150       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 18:57:42.852159       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 18:57:42.976570       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 18:57:42.985720       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 18:57:42.994107       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 18:57:42.994318       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:57:42.995578       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:57:42.995078       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 18:57:43.105967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:58:04.998629       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 18:58:04.998740       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 18:58:04.998752       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 18:58:04.998814       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:58:04.998848       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 18:58:04.998874       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ffd91bf8d144ae57bbcb98a58658e258c643c350367c82e108fcc17014fe663a] <==
	I1202 18:58:18.530016       1 serving.go:386] Generated self-signed cert in-memory
	W1202 18:58:21.246118       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 18:58:21.246224       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 18:58:21.246262       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 18:58:21.246311       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 18:58:21.334975       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 18:58:21.335064       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 18:58:21.347159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 18:58:21.353794       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 18:58:21.353852       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:58:21.355279       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 18:58:21.455972       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 19:06:14 functional-535807 kubelet[3870]: E1202 19:06:14.080537    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:06:20 functional-535807 kubelet[3870]: E1202 19:06:20.080563    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:06:27 functional-535807 kubelet[3870]: E1202 19:06:27.080472    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:06:35 functional-535807 kubelet[3870]: E1202 19:06:35.079845    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:06:41 functional-535807 kubelet[3870]: E1202 19:06:41.080924    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:06:47 functional-535807 kubelet[3870]: E1202 19:06:47.080136    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:06:52 functional-535807 kubelet[3870]: E1202 19:06:52.079867    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:06:59 functional-535807 kubelet[3870]: E1202 19:06:59.080626    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:07:03 functional-535807 kubelet[3870]: E1202 19:07:03.080504    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:07:11 functional-535807 kubelet[3870]: E1202 19:07:11.080298    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:07:18 functional-535807 kubelet[3870]: E1202 19:07:18.079832    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:07:25 functional-535807 kubelet[3870]: E1202 19:07:25.080840    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:07:33 functional-535807 kubelet[3870]: E1202 19:07:33.080174    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:07:40 functional-535807 kubelet[3870]: E1202 19:07:40.079963    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:07:45 functional-535807 kubelet[3870]: E1202 19:07:45.080864    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:07:53 functional-535807 kubelet[3870]: E1202 19:07:53.080447    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:07:58 functional-535807 kubelet[3870]: E1202 19:07:58.080068    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:08:08 functional-535807 kubelet[3870]: E1202 19:08:08.080358    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:08:09 functional-535807 kubelet[3870]: E1202 19:08:09.079807    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:08:21 functional-535807 kubelet[3870]: E1202 19:08:21.079965    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:08:22 functional-535807 kubelet[3870]: E1202 19:08:22.080079    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:08:35 functional-535807 kubelet[3870]: E1202 19:08:35.080062    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:08:37 functional-535807 kubelet[3870]: E1202 19:08:37.080777    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	Dec 02 19:08:48 functional-535807 kubelet[3870]: E1202 19:08:48.080120    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-khx4m" podUID="1ea6fcf3-2a41-4f05-abe7-d12fd65649a6"
	Dec 02 19:08:50 functional-535807 kubelet[3870]: E1202 19:08:50.080508    3870 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bb8gv" podUID="3e5925e9-7392-4388-8f13-7598a5d65071"
	
	
	==> storage-provisioner [4303e19537c66651d3613dee48467060eab1ef1477f2d0247cfcd1afd92903f4] <==
	I1202 18:57:39.998851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 18:57:43.144701       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 18:57:43.146443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 18:57:43.156256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:57:46.610851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:57:50.870727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:57:54.476707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:57:57.530889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:58:00.554037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:58:00.559878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 18:58:00.560055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 18:58:00.560528       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9529310a-331d-4d65-a2c7-431759cfcf3d", APIVersion:"v1", ResourceVersion:"527", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-535807_c196b2aa-04d0-454d-9957-35caf41b7ec8 became leader
	I1202 18:58:00.562462       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-535807_c196b2aa-04d0-454d-9957-35caf41b7ec8!
	W1202 18:58:00.573466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:58:00.586229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 18:58:00.663370       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-535807_c196b2aa-04d0-454d-9957-35caf41b7ec8!
	W1202 18:58:02.589445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:58:02.595694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:58:04.599304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 18:58:04.608960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e2b47172c8517409ab23865960c5d75dfca75f7ca6a040bb51617df198d4f5fe] <==
	W1202 19:08:32.640108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:34.643651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:34.650601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:36.654645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:36.659308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:38.661894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:38.668618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:40.671018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:40.675305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:42.678834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:42.682955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:44.686544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:44.692796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:46.696861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:46.701474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:48.704971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:48.709376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:50.712723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:50.719778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:52.722868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:52.728163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:54.733133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:54.737581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:56.741594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:08:56.749018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-535807 -n functional-535807
helpers_test.go:269: (dbg) Run:  kubectl --context functional-535807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-bb8gv hello-node-connect-7d85dfc575-khx4m
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-535807 describe pod hello-node-75c85bcc94-bb8gv hello-node-connect-7d85dfc575-khx4m
helpers_test.go:290: (dbg) kubectl --context functional-535807 describe pod hello-node-75c85bcc94-bb8gv hello-node-connect-7d85dfc575-khx4m:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-bb8gv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-535807/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 18:59:12 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9mcqn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9mcqn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m46s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bb8gv to functional-535807
	  Normal   Pulling    6m51s (x5 over 9m45s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m51s (x5 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m51s (x5 over 9m45s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m33s (x21 over 9m45s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m33s (x21 over 9m45s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-khx4m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-535807/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 18:58:55 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fct8z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fct8z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-khx4m to functional-535807
	  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m48s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m35s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.57s)
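Note: the post-mortem events above pin this failure on CRI-O's short-name handling: with short-name mode enforcing, the unqualified reference "kicbase/echo-server" resolves to an ambiguous candidate list and the pull is rejected, so the hello-node pods never leave ImagePullBackOff. A minimal sketch of the same deployment with a fully qualified reference follows; the docker.io registry path and the 1.0 tag are illustrative assumptions, not what the test harness actually uses.
	# Sketch only: a fully qualified image reference bypasses short-name resolution entirely.
	# Registry path and tag below are assumptions for illustration.
	kubectl --context functional-535807 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-535807 expose deployment hello-node-connect \
	  --type=NodePort --port=8080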

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-535807 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-535807 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bb8gv" [3e5925e9-7392-4388-8f13-7598a5d65071] Pending
helpers_test.go:352: "hello-node-75c85bcc94-bb8gv" [3e5925e9-7392-4388-8f13-7598a5d65071] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1202 18:59:41.227724    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:01:57.357713    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:02:25.069031    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:06:57.357129    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-535807 -n functional-535807
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-02 19:09:13.287306135 +0000 UTC m=+1273.121606892
functional_test.go:1460: (dbg) Run:  kubectl --context functional-535807 describe po hello-node-75c85bcc94-bb8gv -n default
functional_test.go:1460: (dbg) kubectl --context functional-535807 describe po hello-node-75c85bcc94-bb8gv -n default:
Name:             hello-node-75c85bcc94-bb8gv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-535807/192.168.49.2
Start Time:       Tue, 02 Dec 2025 18:59:12 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9mcqn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9mcqn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bb8gv to functional-535807
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-535807 logs hello-node-75c85bcc94-bb8gv -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-535807 logs hello-node-75c85bcc94-bb8gv -n default: exit status 1 (118.573671ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bb8gv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-535807 logs hello-node-75c85bcc94-bb8gv -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.85s)
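The Events above show why the pod never starts: CRI-O rejects the unqualified reference with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", i.e. short-name resolution is set to enforcing and the bare name matches more than one configured registry, so the kubelet can never complete the pull and every ServiceCmd subtest that depends on this deployment inherits the failure. A plausible manual check and workaround, sketched against this profile and not part of the recorded run (the registries.conf paths are an assumption about the kicbase node image):

    # Deploy with a fully qualified reference so short-name resolution is bypassed
    kubectl --context functional-535807 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest
    # Inspect the short-name policy inside the node
    out/minikube-linux-arm64 -p functional-535807 ssh -- grep -Rn "short-name-mode" /etc/containers/registries.conf /etc/containers/registries.conf.d 2>/dev/null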

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 service --namespace=default --https --url hello-node: exit status 115 (500.43261ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31159
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-535807 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 service hello-node --url --format={{.IP}}: exit status 115 (464.976537ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-535807 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 service hello-node --url: exit status 115 (477.749251ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31159
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-535807 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31159
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.48s)
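The HTTPS, Format and URL subtests each print a plausible NodePort URL but exit 115 with SVC_UNREACHABLE because the hello-node service has no ready endpoints while its pod sits in ImagePullBackOff; they are downstream of the DeployApp failure above rather than independent routing problems. A quick confirmation, sketched against this profile rather than taken from the run:

    kubectl --context functional-535807 -n default get endpoints hello-node
    kubectl --context functional-535807 -n default get pods -l app=hello-node -o wide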

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image load --daemon kicbase/echo-server:functional-535807 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-535807" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image load --daemon kicbase/echo-server:functional-535807 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-535807" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-535807
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image load --daemon kicbase/echo-server:functional-535807 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-535807" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)
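ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon all fail the same assertion: after `image load --daemon`, kicbase/echo-server:functional-535807 does not appear in `image ls`. With the CRI-O runtime the loaded image has to land in the node's containers storage, so comparing minikube's listing with crictl's view on the node is the natural next step when reproducing (a diagnostic sketch, assuming the functional-535807 node is still running):

    out/minikube-linux-arm64 -p functional-535807 image load --daemon kicbase/echo-server:functional-535807 --alsologtostderr
    out/minikube-linux-arm64 -p functional-535807 ssh -- sudo crictl images | grep echo-server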

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image save kicbase/echo-server:functional-535807 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1202 19:09:26.345338   32224 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:09:26.345490   32224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:09:26.345495   32224 out.go:374] Setting ErrFile to fd 2...
	I1202 19:09:26.345500   32224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:09:26.345996   32224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:09:26.347029   32224 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:09:26.347306   32224 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:09:26.348226   32224 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
	I1202 19:09:26.366259   32224 ssh_runner.go:195] Run: systemctl --version
	I1202 19:09:26.366328   32224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
	I1202 19:09:26.383326   32224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
	I1202 19:09:26.484163   32224 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1202 19:09:26.484229   32224 cache_images.go:255] Failed to load cached images for "functional-535807": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1202 19:09:26.484251   32224 cache_images.go:267] failed pushing to: functional-535807

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
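ImageLoadFromFile is a cascade from ImageSaveToFile: the save step never wrote /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar (plausibly because the functional-535807 tag was never present in the node after the load failures above), so the subsequent load fails with "no such file or directory", as the stderr shows. When reproducing locally, verifying the tarball between the two steps separates the save failure from the load failure (a sketch only; the /tmp path is an arbitrary choice, not the path used by the test):

    out/minikube-linux-arm64 -p functional-535807 image save kicbase/echo-server:functional-535807 /tmp/echo-server-save.tar --alsologtostderr
    ls -l /tmp/echo-server-save.tar && tar tf /tmp/echo-server-save.tar | head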

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-535807
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image save --daemon kicbase/echo-server:functional-535807 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-535807
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-535807: exit status 1 (17.631261ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-535807

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-535807

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (506.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1202 19:11:57.357364    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:20.433439    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:46.175976    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:46.182392    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:46.193736    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:46.215207    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:46.256664    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:46.338141    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:46.499595    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:46.821352    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:47.463566    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:48.744980    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:51.307934    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:13:56.430256    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:14:06.672478    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:14:27.153886    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:15:08.116118    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:16:30.037687    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:16:57.356906    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m24.94297176s)

                                                
                                                
-- stdout --
	* [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - HTTP_PROXY=localhost:45791
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:45791 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-374330 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-374330 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001247756s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000361573s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000361573s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
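The start fails with K8S_KUBELET_NOT_RUNNING: kubeadm's wait-control-plane phase gives up after four minutes because http://127.0.0.1:10248/healthz never responds, and the preflight warnings flag cgroup v1 deprecation for kubelet v1.35 (the FailCgroupV1 note) on this 5.15.0-1084-aws kernel. Before the post-mortem below, the two steps suggested by the log itself are to pull the kubelet journal from the node and to retry the start with the cgroup-driver override minikube recommends (a sketch of that hint, not a verified fix):

    out/minikube-linux-arm64 -p functional-374330 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
    out/minikube-linux-arm64 start -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd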
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 6 (303.044595ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 19:18:01.168577   39976 status.go:458] kubeconfig endpoint: get endpoint: "functional-374330" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-535807 image ls                                                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image save kicbase/echo-server:functional-535807 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image rm kicbase/echo-server:functional-535807 --alsologtostderr                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image save --daemon kicbase/echo-server:functional-535807 --alsologtostderr                                                             │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/test/nested/copy/4470/hosts                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/4470.pem                                                                                                    │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /usr/share/ca-certificates/4470.pem                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/44702.pem                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /usr/share/ca-certificates/44702.pem                                                                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format short --alsologtostderr                                                                                               │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh pgrep buildkitd                                                                                                                     │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ image          │ functional-535807 image ls --format yaml --alsologtostderr                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format json --alsologtostderr                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr                                                    │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format table --alsologtostderr                                                                                               │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ delete         │ -p functional-535807                                                                                                                                      │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ start          │ -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:09:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:09:35.958152   33841 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:09:35.958269   33841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:09:35.958273   33841 out.go:374] Setting ErrFile to fd 2...
	I1202 19:09:35.958276   33841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:09:35.958535   33841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:09:35.958928   33841 out.go:368] Setting JSON to false
	I1202 19:09:35.959699   33841 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3114,"bootTime":1764699462,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:09:35.959753   33841 start.go:143] virtualization:  
	I1202 19:09:35.961778   33841 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:09:35.963260   33841 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:09:35.963345   33841 notify.go:221] Checking for updates...
	I1202 19:09:35.965949   33841 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:09:35.967862   33841 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:09:35.969448   33841 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:09:35.970941   33841 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:09:35.972271   33841 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:09:35.973825   33841 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:09:35.994851   33841 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:09:35.994978   33841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:09:36.074455   33841 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:09:36.065039804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:09:36.074549   33841 docker.go:319] overlay module found
	I1202 19:09:36.076529   33841 out.go:179] * Using the docker driver based on user configuration
	I1202 19:09:36.077904   33841 start.go:309] selected driver: docker
	I1202 19:09:36.077911   33841 start.go:927] validating driver "docker" against <nil>
	I1202 19:09:36.077923   33841 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:09:36.078649   33841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:09:36.133891   33841 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:09:36.12505936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:09:36.134026   33841 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:09:36.134256   33841 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:09:36.135846   33841 out.go:179] * Using Docker driver with root privileges
	I1202 19:09:36.137159   33841 cni.go:84] Creating CNI manager for ""
	I1202 19:09:36.137217   33841 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:09:36.137224   33841 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 19:09:36.137295   33841 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:09:36.139069   33841 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:09:36.140436   33841 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:09:36.141899   33841 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:09:36.143546   33841 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:09:36.143618   33841 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:09:36.163054   33841 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:09:36.163065   33841 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:09:36.210277   33841 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:09:36.454592   33841 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:09:36.454878   33841 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.454933   33841 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:09:36.454960   33841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json: {Name:mk8b82cf15245127a546687df4677965b77a38be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:09:36.454969   33841 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:09:36.454978   33841 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 113.901µs
	I1202 19:09:36.454990   33841 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:09:36.455000   33841 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.455053   33841 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:09:36.455058   33841 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 58.509µs
	I1202 19:09:36.455062   33841 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:09:36.455071   33841 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.455097   33841 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:09:36.455101   33841 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 31.073µs
	I1202 19:09:36.455112   33841 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:09:36.455114   33841 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:09:36.455126   33841 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.455133   33841 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.455152   33841 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:09:36.455156   33841 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.753µs
	I1202 19:09:36.455161   33841 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:09:36.455171   33841 start.go:364] duration metric: took 30.908µs to acquireMachinesLock for "functional-374330"
	I1202 19:09:36.455168   33841 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.455193   33841 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:09:36.455197   33841 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 29.538µs
	I1202 19:09:36.455201   33841 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:09:36.455209   33841 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.455231   33841 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:09:36.455186   33841 start.go:93] Provisioning new machine with config: &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:09:36.455235   33841 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 27.289µs
	I1202 19:09:36.455241   33841 start.go:125] createHost starting for "" (driver="docker")
	I1202 19:09:36.455244   33841 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:09:36.455251   33841 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.455274   33841 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:09:36.455278   33841 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 27.61µs
	I1202 19:09:36.455282   33841 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:09:36.455297   33841 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:09:36.455326   33841 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:09:36.455329   33841 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.967µs
	I1202 19:09:36.455334   33841 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:09:36.455339   33841 cache.go:87] Successfully saved all images to host disk.
	I1202 19:09:36.456907   33841 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1202 19:09:36.457143   33841 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:45791 to docker env.
	I1202 19:09:36.457206   33841 start.go:159] libmachine.API.Create for "functional-374330" (driver="docker")
	I1202 19:09:36.457226   33841 client.go:173] LocalClient.Create starting
	I1202 19:09:36.457299   33841 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem
	I1202 19:09:36.457330   33841 main.go:143] libmachine: Decoding PEM data...
	I1202 19:09:36.457343   33841 main.go:143] libmachine: Parsing certificate...
	I1202 19:09:36.457409   33841 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem
	I1202 19:09:36.457424   33841 main.go:143] libmachine: Decoding PEM data...
	I1202 19:09:36.457435   33841 main.go:143] libmachine: Parsing certificate...
	I1202 19:09:36.458331   33841 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 19:09:36.480611   33841 cli_runner.go:211] docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 19:09:36.480683   33841 network_create.go:284] running [docker network inspect functional-374330] to gather additional debugging logs...
	I1202 19:09:36.480699   33841 cli_runner.go:164] Run: docker network inspect functional-374330
	W1202 19:09:36.496085   33841 cli_runner.go:211] docker network inspect functional-374330 returned with exit code 1
	I1202 19:09:36.496104   33841 network_create.go:287] error running [docker network inspect functional-374330]: docker network inspect functional-374330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-374330 not found
	I1202 19:09:36.496116   33841 network_create.go:289] output of [docker network inspect functional-374330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-374330 not found
	
	** /stderr **
	I1202 19:09:36.496242   33841 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:09:36.512637   33841 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c9ea0}
	I1202 19:09:36.512680   33841 network_create.go:124] attempt to create docker network functional-374330 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 19:09:36.512749   33841 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-374330 functional-374330
	I1202 19:09:36.561946   33841 network_create.go:108] docker network functional-374330 192.168.49.0/24 created
	I1202 19:09:36.561967   33841 kic.go:121] calculated static IP "192.168.49.2" for the "functional-374330" container
	I1202 19:09:36.562047   33841 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 19:09:36.577084   33841 cli_runner.go:164] Run: docker volume create functional-374330 --label name.minikube.sigs.k8s.io=functional-374330 --label created_by.minikube.sigs.k8s.io=true
	I1202 19:09:36.593836   33841 oci.go:103] Successfully created a docker volume functional-374330
	I1202 19:09:36.593904   33841 cli_runner.go:164] Run: docker run --rm --name functional-374330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-374330 --entrypoint /usr/bin/test -v functional-374330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 19:09:37.023372   33841 oci.go:107] Successfully prepared a docker volume functional-374330
	I1202 19:09:37.023435   33841 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1202 19:09:37.023570   33841 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 19:09:37.023691   33841 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 19:09:37.089828   33841 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-374330 --name functional-374330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-374330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-374330 --network functional-374330 --ip 192.168.49.2 --volume functional-374330:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 19:09:37.333631   33841 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Running}}
	I1202 19:09:37.353479   33841 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:09:37.383241   33841 cli_runner.go:164] Run: docker exec functional-374330 stat /var/lib/dpkg/alternatives/iptables
	I1202 19:09:37.437489   33841 oci.go:144] the created container "functional-374330" has a running status.
	I1202 19:09:37.437513   33841 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa...
	I1202 19:09:37.985406   33841 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 19:09:38.020308   33841 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:09:38.053933   33841 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 19:09:38.053944   33841 kic_runner.go:114] Args: [docker exec --privileged functional-374330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 19:09:38.115804   33841 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:09:38.147306   33841 machine.go:94] provisionDockerMachine start ...
	I1202 19:09:38.147399   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:38.175530   33841 main.go:143] libmachine: Using SSH client type: native
	I1202 19:09:38.175865   33841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:09:38.175871   33841 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:09:38.329778   33841 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:09:38.329791   33841 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:09:38.329865   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:38.353183   33841 main.go:143] libmachine: Using SSH client type: native
	I1202 19:09:38.353565   33841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:09:38.353574   33841 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:09:38.518977   33841 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:09:38.519063   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:38.537609   33841 main.go:143] libmachine: Using SSH client type: native
	I1202 19:09:38.537940   33841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:09:38.537963   33841 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:09:38.685623   33841 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:09:38.685639   33841 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:09:38.685692   33841 ubuntu.go:190] setting up certificates
	I1202 19:09:38.685700   33841 provision.go:84] configureAuth start
	I1202 19:09:38.685760   33841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:09:38.702820   33841 provision.go:143] copyHostCerts
	I1202 19:09:38.702870   33841 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:09:38.702877   33841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:09:38.702952   33841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:09:38.703079   33841 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:09:38.703083   33841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:09:38.703108   33841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:09:38.703160   33841 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:09:38.703165   33841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:09:38.703187   33841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:09:38.703229   33841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:09:38.834607   33841 provision.go:177] copyRemoteCerts
	I1202 19:09:38.834664   33841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:09:38.834704   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:38.851863   33841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:09:38.956992   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:09:38.973820   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:09:38.993514   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:09:39.010440   33841 provision.go:87] duration metric: took 324.720375ms to configureAuth
	I1202 19:09:39.010457   33841 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:09:39.010638   33841 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:09:39.010736   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:39.032054   33841 main.go:143] libmachine: Using SSH client type: native
	I1202 19:09:39.032378   33841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:09:39.032390   33841 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:09:39.323811   33841 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:09:39.323823   33841 machine.go:97] duration metric: took 1.176506246s to provisionDockerMachine
	I1202 19:09:39.323833   33841 client.go:176] duration metric: took 2.866601703s to LocalClient.Create
	I1202 19:09:39.323851   33841 start.go:167] duration metric: took 2.866644903s to libmachine.API.Create "functional-374330"
	I1202 19:09:39.323857   33841 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:09:39.323867   33841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:09:39.323937   33841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:09:39.323975   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:39.342143   33841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:09:39.445694   33841 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:09:39.449131   33841 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:09:39.449148   33841 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:09:39.449157   33841 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:09:39.449212   33841 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:09:39.449299   33841 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:09:39.449382   33841 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:09:39.449430   33841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:09:39.456825   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:09:39.473737   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:09:39.490627   33841 start.go:296] duration metric: took 166.756556ms for postStartSetup
	I1202 19:09:39.490967   33841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:09:39.507504   33841 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:09:39.507772   33841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:09:39.507810   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:39.524730   33841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:09:39.626662   33841 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:09:39.631288   33841 start.go:128] duration metric: took 3.176008542s to createHost
	I1202 19:09:39.631303   33841 start.go:83] releasing machines lock for "functional-374330", held for 3.1761257s
	I1202 19:09:39.631373   33841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:09:39.651507   33841 out.go:179] * Found network options:
	I1202 19:09:39.652907   33841 out.go:179]   - HTTP_PROXY=localhost:45791
	W1202 19:09:39.654349   33841 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1202 19:09:39.656613   33841 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1202 19:09:39.658170   33841 ssh_runner.go:195] Run: cat /version.json
	I1202 19:09:39.658191   33841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:09:39.658211   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:39.658244   33841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:09:39.679215   33841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:09:39.679104   33841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:09:39.873478   33841 ssh_runner.go:195] Run: systemctl --version
	I1202 19:09:39.879682   33841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:09:39.914871   33841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:09:39.919172   33841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:09:39.919233   33841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:09:39.943892   33841 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 19:09:39.943906   33841 start.go:496] detecting cgroup driver to use...
	I1202 19:09:39.943936   33841 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:09:39.943982   33841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:09:39.962015   33841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:09:39.974561   33841 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:09:39.974612   33841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:09:39.991804   33841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:09:40.010209   33841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:09:40.145246   33841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:09:40.272738   33841 docker.go:234] disabling docker service ...
	I1202 19:09:40.272793   33841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:09:40.296382   33841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:09:40.310141   33841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:09:40.426051   33841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:09:40.541856   33841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:09:40.554555   33841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:09:40.567639   33841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:09:40.567700   33841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:09:40.576240   33841 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:09:40.576299   33841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:09:40.584710   33841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:09:40.592920   33841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:09:40.601562   33841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:09:40.609400   33841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:09:40.618032   33841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:09:40.630989   33841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:09:40.639194   33841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:09:40.646250   33841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:09:40.653368   33841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:09:40.768526   33841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:09:40.930700   33841 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:09:40.930760   33841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:09:40.934411   33841 start.go:564] Will wait 60s for crictl version
	I1202 19:09:40.934462   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:40.937962   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:09:40.961923   33841 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:09:40.961993   33841 ssh_runner.go:195] Run: crio --version
	I1202 19:09:40.988633   33841 ssh_runner.go:195] Run: crio --version
	I1202 19:09:41.019365   33841 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:09:41.020961   33841 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:09:41.039840   33841 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:09:41.043632   33841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:09:41.053491   33841 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:09:41.053609   33841 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:09:41.053698   33841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:09:41.077168   33841 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 19:09:41.077182   33841 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 19:09:41.077231   33841 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:09:41.077419   33841 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 19:09:41.077506   33841 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 19:09:41.077575   33841 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 19:09:41.077642   33841 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 19:09:41.077740   33841 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 19:09:41.077866   33841 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 19:09:41.077922   33841 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 19:09:41.079034   33841 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 19:09:41.079414   33841 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:09:41.079627   33841 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 19:09:41.079782   33841 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 19:09:41.079893   33841 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 19:09:41.080098   33841 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 19:09:41.080235   33841 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 19:09:41.080377   33841 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 19:09:41.387380   33841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 19:09:41.395323   33841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 19:09:41.397375   33841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 19:09:41.398374   33841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 19:09:41.398437   33841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 19:09:41.404267   33841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 19:09:41.404721   33841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 19:09:41.496980   33841 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 19:09:41.497012   33841 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 19:09:41.497070   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:41.530475   33841 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 19:09:41.530508   33841 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 19:09:41.530564   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:41.554137   33841 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 19:09:41.554169   33841 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 19:09:41.554224   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:41.554302   33841 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 19:09:41.554314   33841 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 19:09:41.554331   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:41.554378   33841 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 19:09:41.554391   33841 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 19:09:41.554409   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:41.554468   33841 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 19:09:41.554477   33841 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 19:09:41.554495   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:41.567634   33841 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 19:09:41.567661   33841 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 19:09:41.567717   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:41.567801   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 19:09:41.567850   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 19:09:41.567939   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 19:09:41.567986   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 19:09:41.568032   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 19:09:41.568090   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 19:09:41.670686   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 19:09:41.670696   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 19:09:41.670764   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 19:09:41.670794   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 19:09:41.670836   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 19:09:41.670871   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 19:09:41.670890   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 19:09:41.774087   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 19:09:41.774164   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 19:09:41.774217   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 19:09:41.774268   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 19:09:41.774320   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 19:09:41.774376   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 19:09:41.774429   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
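A minimal shell sketch of the cleanup pass logged above, assuming crictl resolves to /usr/local/bin/crictl as it does on this node; the image list shown and the `|| true` guard are illustrative additions, not taken from the log:

    # resolve crictl, then delete any stale copies of images flagged as "needs transfer"
    CRICTL="$(which crictl)"
    for img in registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/kube-apiserver:v1.35.0-beta.0; do
      sudo "$CRICTL" rmi "$img" || true   # "image not found" is expected on a clean node
    done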
	I1202 19:09:41.880355   33841 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 19:09:41.880359   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 19:09:41.880418   33841 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 19:09:41.880449   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 19:09:41.880468   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 19:09:41.880497   33841 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 19:09:41.880534   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 19:09:41.880536   33841 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 19:09:41.880574   33841 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 19:09:41.880592   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 19:09:41.880610   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 19:09:41.880656   33841 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 19:09:41.880703   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 19:09:41.914358   33841 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 19:09:41.914384   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 19:09:41.914411   33841 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 19:09:41.914441   33841 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 19:09:41.914449   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 19:09:41.914491   33841 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 19:09:41.914499   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 19:09:41.914520   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 19:09:41.914540   33841 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 19:09:41.914549   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 19:09:41.914572   33841 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 19:09:41.914579   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 19:09:41.914608   33841 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 19:09:41.914619   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 19:09:41.933140   33841 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 19:09:41.933161   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
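The stat-then-copy pattern in the existence checks above can be reproduced by hand roughly as follows; NODE is a hypothetical SSH alias for the kicbase container, not something taken from this log:

    # copy a cached image tarball onto the node only if it is not already there
    IMG=pause_3.10.1
    if ! ssh NODE "stat -c '%s %y' /var/lib/minikube/images/$IMG" >/dev/null 2>&1; then
      scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/$IMG \
          NODE:/var/lib/minikube/images/$IMG
    fi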
	I1202 19:09:42.048595   33841 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 19:09:42.048657   33841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1202 19:09:42.405404   33841 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 19:09:42.405590   33841 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:09:42.570123   33841 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 19:09:42.570143   33841 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 19:09:42.570183   33841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 19:09:42.633706   33841 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 19:09:42.633735   33841 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:09:42.633780   33841 ssh_runner.go:195] Run: which crictl
	I1202 19:09:44.043629   33841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.473419081s)
	I1202 19:09:44.043648   33841 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 19:09:44.043648   33841 ssh_runner.go:235] Completed: which crictl: (1.409854176s)
	I1202 19:09:44.043668   33841 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 19:09:44.043714   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:09:44.043727   33841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 19:09:44.078706   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:09:45.959044   33841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.915296534s)
	I1202 19:09:45.959062   33841 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 19:09:45.959078   33841 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 19:09:45.959122   33841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 19:09:45.959187   33841 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.880470397s)
	I1202 19:09:45.959215   33841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:09:45.992350   33841 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 19:09:45.992440   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 19:09:47.146312   33841 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.153853872s)
	I1202 19:09:47.146333   33841 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 19:09:47.146362   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 19:09:47.146420   33841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.187287368s)
	I1202 19:09:47.146427   33841 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 19:09:47.146442   33841 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 19:09:47.146476   33841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 19:09:48.369884   33841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.223385746s)
	I1202 19:09:48.369902   33841 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 19:09:48.369929   33841 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 19:09:48.369979   33841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 19:09:49.640314   33841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.270308075s)
	I1202 19:09:49.640333   33841 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 19:09:49.640350   33841 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 19:09:49.640394   33841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 19:09:50.985093   33841 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.344678053s)
	I1202 19:09:50.985109   33841 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 19:09:50.985133   33841 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 19:09:50.985179   33841 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 19:09:51.558713   33841 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 19:09:51.558736   33841 cache_images.go:125] Successfully loaded all cached images
	I1202 19:09:51.558740   33841 cache_images.go:94] duration metric: took 10.48154634s to LoadCachedImages
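Each transferred tarball is loaded into CRI-O's image store via podman (the crio.go:275 / `podman load` pairs above); a standalone equivalent for one image, using commands that appear verbatim in this log:

    # load a cached tarball into the node's container storage and confirm the image is visible
    sudo podman load -i /var/lib/minikube/images/pause_3.10.1
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.10.1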
	I1202 19:09:51.558751   33841 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:09:51.558830   33841 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:09:51.558907   33841 ssh_runner.go:195] Run: crio config
	I1202 19:09:51.620619   33841 cni.go:84] Creating CNI manager for ""
	I1202 19:09:51.620629   33841 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:09:51.620652   33841 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:09:51.620681   33841 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:09:51.620860   33841 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
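The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and only swapped into place right before init runs; both commands appear later in this log. Condensed, and with the long --ignore-preflight-errors list abbreviated to a single representative entry:

    # promote the staged config and run kubeadm init with minikube's bundled binaries
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification'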
	I1202 19:09:51.620957   33841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:09:51.628626   33841 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 19:09:51.628675   33841 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:09:51.636281   33841 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 19:09:51.636323   33841 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 19:09:51.636361   33841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:09:51.636376   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 19:09:51.636454   33841 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 19:09:51.636517   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 19:09:51.652346   33841 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 19:09:51.652373   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 19:09:51.652425   33841 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 19:09:51.652454   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 19:09:51.652498   33841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 19:09:51.663617   33841 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 19:09:51.663659   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
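The three control-plane binaries are served from dl.k8s.io together with a sha256 file (the URLs appear in the binary.go:80 lines above); a manual download-and-verify for one of them, using the standard checksum pattern rather than anything specific to this run, might look like:

    # fetch kubeadm for linux/arm64 and check it against the published sha256
    curl -fLO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm
    curl -fLO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check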
	I1202 19:09:52.459550   33841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:09:52.467776   33841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:09:52.480883   33841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:09:52.494669   33841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 19:09:52.507885   33841 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:09:52.511562   33841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:09:52.521329   33841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:09:52.648165   33841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:09:52.665081   33841 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:09:52.665092   33841 certs.go:195] generating shared ca certs ...
	I1202 19:09:52.665116   33841 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:09:52.665273   33841 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:09:52.665313   33841 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:09:52.665319   33841 certs.go:257] generating profile certs ...
	I1202 19:09:52.665386   33841 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:09:52.665395   33841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt with IP's: []
	I1202 19:09:52.756453   33841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt ...
	I1202 19:09:52.756469   33841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: {Name:mkb79e87b857e21b00c824135ae2752f665592ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:09:52.756669   33841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key ...
	I1202 19:09:52.756676   33841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key: {Name:mkde4c35ed9922f38dff463d9804c833f5e219b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:09:52.756768   33841 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:09:52.756779   33841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt.b350056b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 19:09:53.085312   33841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt.b350056b ...
	I1202 19:09:53.085333   33841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt.b350056b: {Name:mk604bc9fb338eca66a3f89767d9e5a0bf312776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:09:53.085533   33841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b ...
	I1202 19:09:53.085542   33841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b: {Name:mk173f07f9fc2dbddbfdce406d3d6fc9c91608b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:09:53.085626   33841 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt.b350056b -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt
	I1202 19:09:53.085735   33841 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key
	I1202 19:09:53.085790   33841 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:09:53.085803   33841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt with IP's: []
	I1202 19:09:53.353944   33841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt ...
	I1202 19:09:53.353959   33841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt: {Name:mkfc556f5fd0339202a455691b8aa6b7de03ee95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:09:53.354159   33841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key ...
	I1202 19:09:53.354167   33841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key: {Name:mkbecbec08121fd2e5a038c3129bdded181aeb88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:09:53.354357   33841 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:09:53.354398   33841 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:09:53.354406   33841 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:09:53.354431   33841 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:09:53.354454   33841 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:09:53.354481   33841 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:09:53.354524   33841 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:09:53.355066   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:09:53.372257   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:09:53.389749   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:09:53.408627   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:09:53.425738   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:09:53.442129   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:09:53.459410   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:09:53.476584   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:09:53.493715   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:09:53.510648   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:09:53.527955   33841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:09:53.546639   33841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:09:53.559340   33841 ssh_runner.go:195] Run: openssl version
	I1202 19:09:53.565790   33841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:09:53.574447   33841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:09:53.578883   33841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:09:53.578951   33841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:09:53.620822   33841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:09:53.629524   33841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:09:53.637954   33841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:09:53.641570   33841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:09:53.641624   33841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:09:53.684864   33841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:09:53.693622   33841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:09:53.702227   33841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:09:53.706074   33841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:09:53.706128   33841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:09:53.747372   33841 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
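The openssl/ln pairs above follow the usual hashed-symlink convention for system trust stores: each PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 link. A condensed sketch for the minikubeCA certificate, assuming the earlier link into /etc/ssl/certs already exists:

    # compute the subject hash and link the CA into the hashed trust directory
    HASH="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"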
	I1202 19:09:53.755242   33841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:09:53.758578   33841 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 19:09:53.758618   33841 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:09:53.758692   33841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:09:53.758743   33841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:09:53.785002   33841 cri.go:89] found id: ""
	I1202 19:09:53.785063   33841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:09:53.792990   33841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:09:53.800295   33841 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:09:53.800346   33841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:09:53.807833   33841 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:09:53.807851   33841 kubeadm.go:158] found existing configuration files:
	
	I1202 19:09:53.807898   33841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:09:53.815461   33841 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:09:53.815522   33841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:09:53.822836   33841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:09:53.830105   33841 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:09:53.830166   33841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:09:53.837302   33841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:09:53.844861   33841 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:09:53.844924   33841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:09:53.852228   33841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:09:53.860133   33841 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:09:53.860190   33841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:09:53.867486   33841 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:09:53.904700   33841 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:09:53.904898   33841 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:09:53.970497   33841 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:09:53.970587   33841 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:09:53.970626   33841 kubeadm.go:319] OS: Linux
	I1202 19:09:53.970677   33841 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:09:53.970727   33841 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:09:53.970776   33841 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:09:53.970826   33841 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:09:53.970877   33841 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:09:53.970941   33841 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:09:53.970989   33841 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:09:53.971039   33841 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:09:53.971087   33841 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:09:54.034089   33841 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:09:54.034192   33841 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:09:54.034287   33841 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:09:54.050558   33841 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:09:54.059429   33841 out.go:252]   - Generating certificates and keys ...
	I1202 19:09:54.059545   33841 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:09:54.059644   33841 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:09:54.357937   33841 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 19:09:54.637197   33841 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 19:09:54.901999   33841 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 19:09:55.318333   33841 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 19:09:55.418725   33841 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 19:09:55.419027   33841 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-374330 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 19:09:55.526504   33841 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 19:09:55.526862   33841 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-374330 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 19:09:55.706883   33841 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 19:09:56.104178   33841 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 19:09:56.374706   33841 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 19:09:56.374929   33841 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:09:56.628112   33841 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:09:56.918985   33841 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:09:57.111597   33841 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:09:57.332887   33841 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:09:57.728082   33841 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:09:57.729016   33841 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:09:57.734174   33841 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:09:57.746000   33841 out.go:252]   - Booting up control plane ...
	I1202 19:09:57.746148   33841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:09:57.746234   33841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:09:57.746301   33841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:09:57.759393   33841 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:09:57.759732   33841 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:09:57.769284   33841 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:09:57.769811   33841 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:09:57.769854   33841 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:09:57.908692   33841 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:09:57.908808   33841 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:13:57.909738   33841 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001247756s
	I1202 19:13:57.909765   33841 kubeadm.go:319] 
	I1202 19:13:57.909821   33841 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:13:57.909853   33841 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:13:57.909967   33841 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:13:57.909971   33841 kubeadm.go:319] 
	I1202 19:13:57.910077   33841 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:13:57.910108   33841 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:13:57.910141   33841 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:13:57.910144   33841 kubeadm.go:319] 
	I1202 19:13:57.915962   33841 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:13:57.916410   33841 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:13:57.916530   33841 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:13:57.916781   33841 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:13:57.916785   33841 kubeadm.go:319] 
	I1202 19:13:57.916855   33841 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
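Before the retry recorded further down, the failure can be triaged on the node with the commands kubeadm itself suggests above, plus a direct probe of the healthz endpoint it was polling:

    # inspect the kubelet that never became healthy
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 100
    curl -sSL http://127.0.0.1:10248/healthz   # the endpoint kubeadm polls for up to 4m0s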
	W1202 19:13:57.916967   33841 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-374330 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-374330 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001247756s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 19:13:57.917065   33841 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:13:58.332305   33841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:13:58.344960   33841 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:13:58.345011   33841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:13:58.353020   33841 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:13:58.353028   33841 kubeadm.go:158] found existing configuration files:
	
	I1202 19:13:58.353078   33841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:13:58.360530   33841 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:13:58.360580   33841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:13:58.368002   33841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:13:58.375460   33841 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:13:58.375511   33841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:13:58.382678   33841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:13:58.389814   33841 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:13:58.389863   33841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:13:58.396819   33841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:13:58.404154   33841 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:13:58.404204   33841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:13:58.411338   33841 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:13:58.448317   33841 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:13:58.448638   33841 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:13:58.522214   33841 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:13:58.522292   33841 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:13:58.522333   33841 kubeadm.go:319] OS: Linux
	I1202 19:13:58.522378   33841 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:13:58.522432   33841 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:13:58.522495   33841 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:13:58.522543   33841 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:13:58.522620   33841 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:13:58.522682   33841 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:13:58.522740   33841 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:13:58.522789   33841 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:13:58.522849   33841 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:13:58.587244   33841 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:13:58.587413   33841 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:13:58.587510   33841 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:13:58.600922   33841 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:13:58.606260   33841 out.go:252]   - Generating certificates and keys ...
	I1202 19:13:58.606370   33841 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:13:58.606445   33841 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:13:58.606527   33841 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:13:58.606592   33841 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:13:58.606666   33841 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:13:58.606722   33841 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:13:58.606788   33841 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:13:58.606853   33841 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:13:58.606932   33841 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:13:58.607009   33841 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:13:58.607221   33841 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:13:58.607287   33841 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:13:58.682611   33841 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:13:58.980899   33841 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:13:59.682159   33841 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:13:59.763645   33841 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:14:00.126462   33841 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:14:00.126610   33841 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:14:00.137249   33841 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:14:00.142658   33841 out.go:252]   - Booting up control plane ...
	I1202 19:14:00.142760   33841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:14:00.142837   33841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:14:00.142904   33841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:14:00.165133   33841 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:14:00.165251   33841 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:14:00.181923   33841 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:14:00.182018   33841 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:14:00.182066   33841 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:14:00.403975   33841 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:14:00.404113   33841 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:18:00.403934   33841 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000361573s
	I1202 19:18:00.403954   33841 kubeadm.go:319] 
	I1202 19:18:00.404037   33841 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:18:00.404077   33841 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:18:00.404196   33841 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:18:00.404200   33841 kubeadm.go:319] 
	I1202 19:18:00.404334   33841 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:18:00.404375   33841 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:18:00.404407   33841 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:18:00.404411   33841 kubeadm.go:319] 
	I1202 19:18:00.412090   33841 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:18:00.412507   33841 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:18:00.412615   33841 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:18:00.412851   33841 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:18:00.412856   33841 kubeadm.go:319] 
	I1202 19:18:00.412924   33841 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 19:18:00.412976   33841 kubeadm.go:403] duration metric: took 8m6.654361154s to StartCluster
	I1202 19:18:00.413008   33841 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:18:00.413074   33841 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:18:00.445175   33841 cri.go:89] found id: ""
	I1202 19:18:00.445190   33841 logs.go:282] 0 containers: []
	W1202 19:18:00.445197   33841 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:18:00.445207   33841 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:18:00.445271   33841 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:18:00.472604   33841 cri.go:89] found id: ""
	I1202 19:18:00.472618   33841 logs.go:282] 0 containers: []
	W1202 19:18:00.472625   33841 logs.go:284] No container was found matching "etcd"
	I1202 19:18:00.472630   33841 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:18:00.472693   33841 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:18:00.499501   33841 cri.go:89] found id: ""
	I1202 19:18:00.499515   33841 logs.go:282] 0 containers: []
	W1202 19:18:00.499522   33841 logs.go:284] No container was found matching "coredns"
	I1202 19:18:00.499527   33841 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:18:00.499584   33841 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:18:00.526469   33841 cri.go:89] found id: ""
	I1202 19:18:00.526482   33841 logs.go:282] 0 containers: []
	W1202 19:18:00.526489   33841 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:18:00.526494   33841 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:18:00.526551   33841 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:18:00.562110   33841 cri.go:89] found id: ""
	I1202 19:18:00.562123   33841 logs.go:282] 0 containers: []
	W1202 19:18:00.562130   33841 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:18:00.562136   33841 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:18:00.562193   33841 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:18:00.590912   33841 cri.go:89] found id: ""
	I1202 19:18:00.590925   33841 logs.go:282] 0 containers: []
	W1202 19:18:00.590932   33841 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:18:00.590945   33841 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:18:00.591000   33841 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:18:00.616269   33841 cri.go:89] found id: ""
	I1202 19:18:00.616283   33841 logs.go:282] 0 containers: []
	W1202 19:18:00.616290   33841 logs.go:284] No container was found matching "kindnet"
	I1202 19:18:00.616297   33841 logs.go:123] Gathering logs for container status ...
	I1202 19:18:00.616308   33841 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:18:00.643542   33841 logs.go:123] Gathering logs for kubelet ...
	I1202 19:18:00.643556   33841 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:18:00.714814   33841 logs.go:123] Gathering logs for dmesg ...
	I1202 19:18:00.714831   33841 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:18:00.725432   33841 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:18:00.725446   33841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:18:00.784690   33841 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:18:00.777282    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:00.777875    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:00.779425    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:00.779871    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:00.781334    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:18:00.777282    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:00.777875    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:00.779425    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:00.779871    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:00.781334    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:18:00.784701   33841 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:18:00.784711   33841 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1202 19:18:00.827855   33841 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000361573s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 19:18:00.827896   33841 out.go:285] * 
	W1202 19:18:00.828011   33841 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000361573s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:18:00.828068   33841 out.go:285] * 
	W1202 19:18:00.830219   33841 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:18:00.835835   33841 out.go:203] 
	W1202 19:18:00.838586   33841 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000361573s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:18:00.838630   33841 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 19:18:00.838648   33841 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 19:18:00.841875   33841 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:09:41 functional-374330 crio[843]: time="2025-12-02T19:09:41.911517174Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-beta.0 not found" id=8f5bcabd-46aa-4847-b140-5ed8177a6a49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:41 functional-374330 crio[843]: time="2025-12-02T19:09:41.911570465Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-apiserver:v1.35.0-beta.0 found" id=8f5bcabd-46aa-4847-b140-5ed8177a6a49 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:44 functional-374330 crio[843]: time="2025-12-02T19:09:44.068188513Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=85fa9646-007b-4720-b8b3-97c95d3a8564 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:44 functional-374330 crio[843]: time="2025-12-02T19:09:44.068667788Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=85fa9646-007b-4720-b8b3-97c95d3a8564 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:44 functional-374330 crio[843]: time="2025-12-02T19:09:44.068730835Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=85fa9646-007b-4720-b8b3-97c95d3a8564 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:44 functional-374330 crio[843]: time="2025-12-02T19:09:44.103121421Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6946cea0-8437-4b85-bc77-2ce012d8b984 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:44 functional-374330 crio[843]: time="2025-12-02T19:09:44.103281139Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=6946cea0-8437-4b85-bc77-2ce012d8b984 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:44 functional-374330 crio[843]: time="2025-12-02T19:09:44.103319308Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=6946cea0-8437-4b85-bc77-2ce012d8b984 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:45 functional-374330 crio[843]: time="2025-12-02T19:09:45.982263028Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=11bca854-7860-4978-844b-b125292dad04 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:45 functional-374330 crio[843]: time="2025-12-02T19:09:45.98254449Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=11bca854-7860-4978-844b-b125292dad04 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:45 functional-374330 crio[843]: time="2025-12-02T19:09:45.982585917Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=11bca854-7860-4978-844b-b125292dad04 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:54 functional-374330 crio[843]: time="2025-12-02T19:09:54.038801323Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=a7ed778d-47cd-4260-af17-dbd032df1128 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:54 functional-374330 crio[843]: time="2025-12-02T19:09:54.042019162Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=242dd3e2-1493-41dc-ba57-ded9b1483834 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:54 functional-374330 crio[843]: time="2025-12-02T19:09:54.043626215Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=f28e4ea3-c9cc-4e79-896b-7cff7d0493d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:54 functional-374330 crio[843]: time="2025-12-02T19:09:54.045170615Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a5349ee0-a3c5-4145-8467-8ce8aa9d1fb6 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:54 functional-374330 crio[843]: time="2025-12-02T19:09:54.046104427Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=08b769e7-7e09-4d06-9653-7bac27705ca3 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:54 functional-374330 crio[843]: time="2025-12-02T19:09:54.047480552Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=384765c7-603e-4211-9c44-45226dc9bd77 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:09:54 functional-374330 crio[843]: time="2025-12-02T19:09:54.048368852Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5ff30fa2-3b35-4fe1-85b2-66d5e8649bc0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:13:58 functional-374330 crio[843]: time="2025-12-02T19:13:58.590200346Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=20e5bb1f-623e-4532-aa3b-9080ff6795d7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:13:58 functional-374330 crio[843]: time="2025-12-02T19:13:58.592113092Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=05d791b9-a732-462e-9ec7-fe17bc6854df name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:13:58 functional-374330 crio[843]: time="2025-12-02T19:13:58.593536215Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=7143647d-22cd-4a6a-897b-7f918164d982 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:13:58 functional-374330 crio[843]: time="2025-12-02T19:13:58.595266836Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e2fbe0a9-87c8-44ae-8547-cf41ac983dc2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:13:58 functional-374330 crio[843]: time="2025-12-02T19:13:58.596306149Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=452b9c59-fa49-4f8e-bf08-c844c05952ba name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:13:58 functional-374330 crio[843]: time="2025-12-02T19:13:58.597984906Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=eeb1560b-c037-47da-9f02-c39920b6864d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:13:58 functional-374330 crio[843]: time="2025-12-02T19:13:58.59898536Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=6fb1dda8-1bf5-4cdc-8ecc-8b0e02b99cb8 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:18:01.812667    5625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:01.813142    5625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:01.814711    5625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:01.815232    5625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:18:01.816776    5625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:18:01 up  1:00,  0 user,  load average: 0.06, 0.25, 0.39
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:17:59 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:17:59 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 02 19:17:59 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:17:59 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:17:59 functional-374330 kubelet[5432]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:17:59 functional-374330 kubelet[5432]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:17:59 functional-374330 kubelet[5432]: E1202 19:17:59.836433    5432 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:17:59 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:17:59 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:18:00 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 02 19:18:00 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:18:00 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:18:00 functional-374330 kubelet[5475]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:18:00 functional-374330 kubelet[5475]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:18:00 functional-374330 kubelet[5475]: E1202 19:18:00.600463    5475 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:18:00 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:18:00 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:18:01 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 02 19:18:01 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:18:01 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:18:01 functional-374330 kubelet[5538]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:18:01 functional-374330 kubelet[5538]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:18:01 functional-374330 kubelet[5538]: E1202 19:18:01.356576    5538 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:18:01 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:18:01 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 6 (430.943027ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 19:18:02.382100   40193 status.go:458] kubeconfig endpoint: get endpoint: "functional-374330" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (506.48s)
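
The kubelet journal above ends every restart with "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning states that cgroup v1 support for kubelet v1.35 or newer must be enabled explicitly via the 'FailCgroupV1' kubelet configuration option. A minimal triage sketch follows, assuming shell access to the node through 'minikube ssh'; the profile name functional-374330 and the --extra-config suggestion are taken directly from the log output above, while everything else is an assumption about next steps, not something this run attempted:

	# Confirm the cgroup mode the node actually sees (cgroup2fs means v2, tmpfs means v1).
	out/minikube-linux-arm64 ssh -p functional-374330 "stat -fc %T /sys/fs/cgroup/"
	# Inspect the crash-looping unit, as the kubeadm output itself recommends.
	out/minikube-linux-arm64 ssh -p functional-374330 "sudo journalctl -xeu kubelet"
	# Retry with the cgroup driver override suggested in the minikube stderr above.
	out/minikube-linux-arm64 start -p functional-374330 --extra-config=kubelet.cgroup-driver=systemd

Whether this job's base image also needs FailCgroupV1 set to false in the kubelet configuration (per the warning's link to KEP 5573) is something to verify separately; the log does not show that being tried.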

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1202 19:18:02.398307    4470 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-374330 --alsologtostderr -v=8
E1202 19:18:46.176003    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:19:13.879305    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:21:57.357599    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:23:46.175469    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-374330 --alsologtostderr -v=8: exit status 80 (6m5.917306987s)

                                                
                                                
-- stdout --
	* [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:18:02.458749   40272 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:18:02.458868   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.458880   40272 out.go:374] Setting ErrFile to fd 2...
	I1202 19:18:02.458886   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.459160   40272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:18:02.459549   40272 out.go:368] Setting JSON to false
	I1202 19:18:02.460340   40272 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3621,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:18:02.460405   40272 start.go:143] virtualization:  
	I1202 19:18:02.464020   40272 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:18:02.467892   40272 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:18:02.467969   40272 notify.go:221] Checking for updates...
	I1202 19:18:02.474021   40272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:18:02.477064   40272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:02.480130   40272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:18:02.483164   40272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:18:02.486142   40272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:18:02.489587   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:02.489732   40272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:18:02.527318   40272 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:18:02.527492   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.584790   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.575369586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.584902   40272 docker.go:319] overlay module found
	I1202 19:18:02.588038   40272 out.go:179] * Using the docker driver based on existing profile
	I1202 19:18:02.590861   40272 start.go:309] selected driver: docker
	I1202 19:18:02.590885   40272 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.591008   40272 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:18:02.591102   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.644457   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.635623623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.644867   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:02.644933   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:02.644976   40272 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.648222   40272 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:18:02.651050   40272 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:18:02.654072   40272 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:18:02.657154   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:02.657223   40272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:18:02.676274   40272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:18:02.676298   40272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:18:02.730421   40272 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:18:02.934277   40272 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:18:02.934463   40272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:18:02.934535   40272 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934623   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:18:02.934634   40272 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.203µs
	I1202 19:18:02.934648   40272 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:18:02.934660   40272 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934690   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:18:02.934695   40272 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.324µs
	I1202 19:18:02.934701   40272 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934707   40272 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:18:02.934711   40272 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934738   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:18:02.934736   40272 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934743   40272 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.525µs
	I1202 19:18:02.934750   40272 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934759   40272 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934774   40272 start.go:364] duration metric: took 25.468µs to acquireMachinesLock for "functional-374330"
	I1202 19:18:02.934787   40272 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:18:02.934789   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:18:02.934792   40272 fix.go:54] fixHost starting: 
	I1202 19:18:02.934794   40272 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 35.864µs
	I1202 19:18:02.934800   40272 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934809   40272 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934834   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:18:02.934845   40272 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 19:18:02.934851   40272 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934859   40272 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934885   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:18:02.934890   40272 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.983µs
	I1202 19:18:02.934895   40272 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:18:02.934913   40272 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934941   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:18:02.934946   40272 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.707µs
	I1202 19:18:02.934951   40272 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:18:02.934960   40272 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934985   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:18:02.934990   40272 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.646µs
	I1202 19:18:02.934995   40272 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:18:02.935015   40272 cache.go:87] Successfully saved all images to host disk.
	I1202 19:18:02.935074   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:02.953213   40272 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:18:02.953249   40272 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:18:02.956557   40272 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:18:02.956597   40272 machine.go:94] provisionDockerMachine start ...
	I1202 19:18:02.956677   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:02.973977   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:02.974301   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:02.974316   40272 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:18:03.125393   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.125419   40272 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:18:03.125485   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.143103   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.143432   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.143449   40272 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:18:03.303153   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.303231   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.322823   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.323149   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.323170   40272 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:18:03.473999   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:18:03.474027   40272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:18:03.474048   40272 ubuntu.go:190] setting up certificates
	I1202 19:18:03.474072   40272 provision.go:84] configureAuth start
	I1202 19:18:03.474137   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:03.492443   40272 provision.go:143] copyHostCerts
	I1202 19:18:03.492497   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492535   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:18:03.492553   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492631   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:18:03.492733   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492755   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:18:03.492763   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492791   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:18:03.492852   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492873   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:18:03.492880   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492905   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:18:03.492966   40272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:18:03.672249   40272 provision.go:177] copyRemoteCerts
	I1202 19:18:03.672315   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:18:03.672360   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.690216   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:03.793601   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:18:03.793730   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:18:03.811690   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:18:03.811788   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:18:03.829853   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:18:03.829937   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:18:03.847063   40272 provision.go:87] duration metric: took 372.963339ms to configureAuth
	I1202 19:18:03.847135   40272 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:18:03.847323   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:03.847434   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.865504   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.865829   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.865845   40272 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:18:04.201120   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:18:04.201145   40272 machine.go:97] duration metric: took 1.244539118s to provisionDockerMachine
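The provisioning step above writes /etc/sysconfig/crio.minikube with the insecure-registry option and restarts CRI-O. As an illustrative sketch only (the profile name is taken from this run; the command below is an editorial example, not part of the test output), the result can be read back from the node:

	minikube ssh -p functional-374330 -- cat /etc/sysconfig/crio.minikube
	# expected to echo: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '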
	I1202 19:18:04.201156   40272 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:18:04.201184   40272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:18:04.201288   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:18:04.201334   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.219464   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.321684   40272 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:18:04.325089   40272 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 19:18:04.325149   40272 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 19:18:04.325168   40272 command_runner.go:130] > VERSION_ID="12"
	I1202 19:18:04.325186   40272 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 19:18:04.325207   40272 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 19:18:04.325237   40272 command_runner.go:130] > ID=debian
	I1202 19:18:04.325255   40272 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 19:18:04.325286   40272 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 19:18:04.325319   40272 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 19:18:04.325987   40272 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:18:04.326040   40272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:18:04.326062   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:18:04.326146   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:18:04.326256   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:18:04.326282   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:18:04.326394   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:18:04.326431   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> /etc/test/nested/copy/4470/hosts
	I1202 19:18:04.326515   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:18:04.334852   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:04.354617   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:18:04.371951   40272 start.go:296] duration metric: took 170.764596ms for postStartSetup
	I1202 19:18:04.372028   40272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:18:04.372100   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.388603   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.485826   40272 command_runner.go:130] > 12%
	I1202 19:18:04.486229   40272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:18:04.490474   40272 command_runner.go:130] > 172G
	I1202 19:18:04.490820   40272 fix.go:56] duration metric: took 1.556023913s for fixHost
	I1202 19:18:04.490841   40272 start.go:83] releasing machines lock for "functional-374330", held for 1.55605912s
	I1202 19:18:04.490913   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:04.507171   40272 ssh_runner.go:195] Run: cat /version.json
	I1202 19:18:04.507212   40272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:18:04.507223   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.507284   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.524406   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.524835   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.718816   40272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 19:18:04.718877   40272 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 19:18:04.719015   40272 ssh_runner.go:195] Run: systemctl --version
	I1202 19:18:04.724818   40272 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 19:18:04.724852   40272 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 19:18:04.725306   40272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:18:04.761633   40272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 19:18:04.765941   40272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 19:18:04.765984   40272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:18:04.766036   40272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:18:04.775671   40272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:18:04.775697   40272 start.go:496] detecting cgroup driver to use...
	I1202 19:18:04.775733   40272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:18:04.775798   40272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:18:04.790690   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:18:04.805178   40272 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:18:04.805246   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:18:04.821173   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:18:04.835737   40272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:18:04.950984   40272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:18:05.087151   40272 docker.go:234] disabling docker service ...
	I1202 19:18:05.087235   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:18:05.103857   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:18:05.118486   40272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:18:05.244193   40272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:18:05.357860   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:18:05.370494   40272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:18:05.383221   40272 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 19:18:05.384408   40272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:18:05.384504   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.393298   40272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:18:05.393384   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.402265   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.411107   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.420227   40272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:18:05.428585   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.437313   40272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.445677   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.454485   40272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:18:05.461070   40272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 19:18:05.462061   40272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:18:05.469806   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:05.580364   40272 ssh_runner.go:195] Run: sudo systemctl restart crio
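The sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before crio is restarted. A hypothetical spot-check of that drop-in, assuming the same profile name as this run (not part of the captured output):

	# list only the settings touched by the commands above
	minikube ssh -p functional-374330 -- sudo grep -e pause_image -e cgroup_manager -e conmon_cgroup -e ip_unprivileged_port_start /etc/crio/crio.conf.d/02-crio.conf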
	I1202 19:18:05.753810   40272 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:18:05.753880   40272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:18:05.759122   40272 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 19:18:05.759148   40272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 19:18:05.759155   40272 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 19:18:05.759163   40272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:05.759168   40272 command_runner.go:130] > Access: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759176   40272 command_runner.go:130] > Modify: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759183   40272 command_runner.go:130] > Change: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759187   40272 command_runner.go:130] >  Birth: -
	I1202 19:18:05.759949   40272 start.go:564] Will wait 60s for crictl version
	I1202 19:18:05.760004   40272 ssh_runner.go:195] Run: which crictl
	I1202 19:18:05.764137   40272 command_runner.go:130] > /usr/local/bin/crictl
	I1202 19:18:05.765127   40272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:18:05.790594   40272 command_runner.go:130] > Version:  0.1.0
	I1202 19:18:05.790618   40272 command_runner.go:130] > RuntimeName:  cri-o
	I1202 19:18:05.790833   40272 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 19:18:05.791045   40272 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 19:18:05.793417   40272 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:18:05.793500   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.827591   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.827617   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.827624   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.827633   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.827640   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.827654   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.827661   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.827671   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.827679   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.827682   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.827686   40272 command_runner.go:130] >      static
	I1202 19:18:05.827702   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.827705   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.827713   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.827719   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.827727   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.827733   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.827740   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.827750   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.827762   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.829485   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.856217   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.856241   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.856248   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.856254   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.856260   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.856264   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.856268   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.856272   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.856277   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.856281   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.856285   40272 command_runner.go:130] >      static
	I1202 19:18:05.856288   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.856292   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.856297   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.856300   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.856307   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.856311   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.856315   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.856333   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.856342   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.862922   40272 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:18:05.865574   40272 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:18:05.881617   40272 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:18:05.885365   40272 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 19:18:05.885465   40272 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:18:05.885585   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:05.885631   40272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:18:05.915386   40272 command_runner.go:130] > {
	I1202 19:18:05.915407   40272 command_runner.go:130] >   "images":  [
	I1202 19:18:05.915412   40272 command_runner.go:130] >     {
	I1202 19:18:05.915425   40272 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 19:18:05.915430   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915436   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 19:18:05.915440   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915443   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915458   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 19:18:05.915465   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915469   40272 command_runner.go:130] >       "size":  "29035622",
	I1202 19:18:05.915474   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915478   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915484   40272 command_runner.go:130] >     },
	I1202 19:18:05.915487   40272 command_runner.go:130] >     {
	I1202 19:18:05.915494   40272 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 19:18:05.915501   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915507   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 19:18:05.915511   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915523   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915531   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 19:18:05.915535   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915542   40272 command_runner.go:130] >       "size":  "74488375",
	I1202 19:18:05.915547   40272 command_runner.go:130] >       "username":  "nonroot",
	I1202 19:18:05.915550   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915553   40272 command_runner.go:130] >     },
	I1202 19:18:05.915562   40272 command_runner.go:130] >     {
	I1202 19:18:05.915572   40272 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 19:18:05.915585   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915590   40272 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 19:18:05.915593   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915597   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915618   40272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 19:18:05.915626   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915630   40272 command_runner.go:130] >       "size":  "60854229",
	I1202 19:18:05.915634   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915637   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915641   40272 command_runner.go:130] >       },
	I1202 19:18:05.915645   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915652   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915661   40272 command_runner.go:130] >     },
	I1202 19:18:05.915666   40272 command_runner.go:130] >     {
	I1202 19:18:05.915681   40272 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 19:18:05.915686   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915691   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 19:18:05.915697   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915702   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915710   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 19:18:05.915713   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915718   40272 command_runner.go:130] >       "size":  "84947242",
	I1202 19:18:05.915721   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915725   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915728   40272 command_runner.go:130] >       },
	I1202 19:18:05.915736   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915743   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915746   40272 command_runner.go:130] >     },
	I1202 19:18:05.915750   40272 command_runner.go:130] >     {
	I1202 19:18:05.915756   40272 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 19:18:05.915762   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915771   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 19:18:05.915778   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915782   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915790   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 19:18:05.915797   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915805   40272 command_runner.go:130] >       "size":  "72167568",
	I1202 19:18:05.915809   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915813   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915816   40272 command_runner.go:130] >       },
	I1202 19:18:05.915820   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915824   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915828   40272 command_runner.go:130] >     },
	I1202 19:18:05.915831   40272 command_runner.go:130] >     {
	I1202 19:18:05.915841   40272 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 19:18:05.915852   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915858   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 19:18:05.915861   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915866   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915880   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 19:18:05.915883   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915887   40272 command_runner.go:130] >       "size":  "74105124",
	I1202 19:18:05.915891   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915896   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915902   40272 command_runner.go:130] >     },
	I1202 19:18:05.915906   40272 command_runner.go:130] >     {
	I1202 19:18:05.915912   40272 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 19:18:05.915917   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915925   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 19:18:05.915930   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915934   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915943   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 19:18:05.915949   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915953   40272 command_runner.go:130] >       "size":  "49819792",
	I1202 19:18:05.915961   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915968   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915972   40272 command_runner.go:130] >       },
	I1202 19:18:05.915976   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915982   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915988   40272 command_runner.go:130] >     },
	I1202 19:18:05.915992   40272 command_runner.go:130] >     {
	I1202 19:18:05.915999   40272 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 19:18:05.916003   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.916010   40272 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.916014   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916018   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.916027   40272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 19:18:05.916043   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916046   40272 command_runner.go:130] >       "size":  "517328",
	I1202 19:18:05.916049   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.916054   40272 command_runner.go:130] >         "value":  "65535"
	I1202 19:18:05.916064   40272 command_runner.go:130] >       },
	I1202 19:18:05.916068   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.916072   40272 command_runner.go:130] >       "pinned":  true
	I1202 19:18:05.916075   40272 command_runner.go:130] >     }
	I1202 19:18:05.916078   40272 command_runner.go:130] >   ]
	I1202 19:18:05.916081   40272 command_runner.go:130] > }
	I1202 19:18:05.916221   40272 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:18:05.916234   40272 cache_images.go:86] Images are preloaded, skipping loading
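The JSON block above is the raw `sudo crictl images --output json` listing that the preload check parses before deciding to skip image loading. A compact, hypothetical way to reproduce the same check by hand (jq assumed to be available on the host; profile name taken from this run):

	# print just the repo tags of the images already present in the node's CRI-O store
	minikube ssh -p functional-374330 -- sudo crictl images --output json | jq -r '.images[].repoTags[]'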
	I1202 19:18:05.916241   40272 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:18:05.916331   40272 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
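The kubelet unit override above carries the --node-ip, --hostname-override, and cgroup flags generated from the cluster config echoed after it. A hypothetical way to inspect the installed unit and its drop-ins on the node (editorial example, not test output):

	minikube ssh -p functional-374330 -- sudo systemctl cat kubelet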
	I1202 19:18:05.916421   40272 ssh_runner.go:195] Run: crio config
	I1202 19:18:05.964092   40272 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 19:18:05.964119   40272 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 19:18:05.964127   40272 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 19:18:05.964130   40272 command_runner.go:130] > #
	I1202 19:18:05.964138   40272 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 19:18:05.964149   40272 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 19:18:05.964156   40272 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 19:18:05.964166   40272 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 19:18:05.964176   40272 command_runner.go:130] > # reload'.
	I1202 19:18:05.964182   40272 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 19:18:05.964189   40272 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 19:18:05.964197   40272 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 19:18:05.964204   40272 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 19:18:05.964210   40272 command_runner.go:130] > [crio]
	I1202 19:18:05.964216   40272 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 19:18:05.964223   40272 command_runner.go:130] > # container images, in this directory.
	I1202 19:18:05.964661   40272 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 19:18:05.964681   40272 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 19:18:05.965195   40272 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 19:18:05.965213   40272 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1202 19:18:05.965585   40272 command_runner.go:130] > # imagestore = ""
	I1202 19:18:05.965601   40272 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 19:18:05.965614   40272 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 19:18:05.966162   40272 command_runner.go:130] > # storage_driver = "overlay"
	I1202 19:18:05.966179   40272 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1202 19:18:05.966186   40272 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 19:18:05.966362   40272 command_runner.go:130] > # storage_option = [
	I1202 19:18:05.966573   40272 command_runner.go:130] > # ]
	I1202 19:18:05.966591   40272 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 19:18:05.966598   40272 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 19:18:05.966880   40272 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 19:18:05.966894   40272 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 19:18:05.966902   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 19:18:05.966914   40272 command_runner.go:130] > # always happen on a node reboot
	I1202 19:18:05.967066   40272 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 19:18:05.967095   40272 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 19:18:05.967102   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 19:18:05.967107   40272 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 19:18:05.967213   40272 command_runner.go:130] > # version_file_persist = ""
	I1202 19:18:05.967225   40272 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 19:18:05.967234   40272 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 19:18:05.967423   40272 command_runner.go:130] > # internal_wipe = true
	I1202 19:18:05.967436   40272 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 19:18:05.967449   40272 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 19:18:05.967580   40272 command_runner.go:130] > # internal_repair = true
	I1202 19:18:05.967590   40272 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 19:18:05.967596   40272 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 19:18:05.967602   40272 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 19:18:05.967753   40272 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 19:18:05.967764   40272 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 19:18:05.967767   40272 command_runner.go:130] > [crio.api]
	I1202 19:18:05.967773   40272 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 19:18:05.967953   40272 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 19:18:05.967969   40272 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 19:18:05.968134   40272 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 19:18:05.968145   40272 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 19:18:05.968169   40272 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 19:18:05.968297   40272 command_runner.go:130] > # stream_port = "0"
	I1202 19:18:05.968307   40272 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 19:18:05.968473   40272 command_runner.go:130] > # stream_enable_tls = false
	I1202 19:18:05.968483   40272 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 19:18:05.968653   40272 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 19:18:05.968663   40272 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 19:18:05.968669   40272 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968775   40272 command_runner.go:130] > # stream_tls_cert = ""
	I1202 19:18:05.968785   40272 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 19:18:05.968792   40272 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968905   40272 command_runner.go:130] > # stream_tls_key = ""
	I1202 19:18:05.968915   40272 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 19:18:05.968922   40272 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 19:18:05.968926   40272 command_runner.go:130] > # automatically pick up the changes.
	I1202 19:18:05.969055   40272 command_runner.go:130] > # stream_tls_ca = ""
	I1202 19:18:05.969084   40272 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969257   40272 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 19:18:05.969270   40272 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969439   40272 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
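
(For reference, 80 * 1024 * 1024 = 83,886,080, so the grpc_max_send_msg_size and grpc_max_recv_msg_size values shown above are simply the documented 80 MiB default spelled out in bytes.)
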
	I1202 19:18:05.969511   40272 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 19:18:05.969528   40272 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 19:18:05.969532   40272 command_runner.go:130] > [crio.runtime]
	I1202 19:18:05.969539   40272 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 19:18:05.969544   40272 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 19:18:05.969548   40272 command_runner.go:130] > # "nofile=1024:2048"
	I1202 19:18:05.969554   40272 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 19:18:05.969676   40272 command_runner.go:130] > # default_ulimits = [
	I1202 19:18:05.969684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.969691   40272 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 19:18:05.969900   40272 command_runner.go:130] > # no_pivot = false
	I1202 19:18:05.969912   40272 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 19:18:05.969920   40272 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 19:18:05.970109   40272 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 19:18:05.970119   40272 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 19:18:05.970124   40272 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 19:18:05.970131   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970227   40272 command_runner.go:130] > # conmon = ""
	I1202 19:18:05.970236   40272 command_runner.go:130] > # Cgroup setting for conmon
	I1202 19:18:05.970244   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 19:18:05.970379   40272 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 19:18:05.970389   40272 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 19:18:05.970395   40272 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 19:18:05.970403   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970521   40272 command_runner.go:130] > # conmon_env = [
	I1202 19:18:05.970671   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970681   40272 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 19:18:05.970687   40272 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 19:18:05.970693   40272 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 19:18:05.970697   40272 command_runner.go:130] > # default_env = [
	I1202 19:18:05.970827   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970837   40272 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 19:18:05.970846   40272 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1202 19:18:05.970995   40272 command_runner.go:130] > # selinux = false
	I1202 19:18:05.971005   40272 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 19:18:05.971014   40272 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 19:18:05.971019   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971123   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.971133   40272 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 19:18:05.971140   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971283   40272 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 19:18:05.971297   40272 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 19:18:05.971349   40272 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 19:18:05.971394   40272 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 19:18:05.971420   40272 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 19:18:05.971426   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971532   40272 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 19:18:05.971542   40272 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 19:18:05.971554   40272 command_runner.go:130] > # the cgroup blockio controller.
	I1202 19:18:05.971691   40272 command_runner.go:130] > # blockio_config_file = ""
	I1202 19:18:05.971702   40272 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 19:18:05.971706   40272 command_runner.go:130] > # blockio parameters.
	I1202 19:18:05.971888   40272 command_runner.go:130] > # blockio_reload = false
	I1202 19:18:05.971899   40272 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 19:18:05.971911   40272 command_runner.go:130] > # irqbalance daemon.
	I1202 19:18:05.972089   40272 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 19:18:05.972099   40272 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1202 19:18:05.972107   40272 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 19:18:05.972118   40272 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 19:18:05.972238   40272 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 19:18:05.972249   40272 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 19:18:05.972255   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.972373   40272 command_runner.go:130] > # rdt_config_file = ""
	I1202 19:18:05.972382   40272 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 19:18:05.972510   40272 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 19:18:05.972521   40272 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 19:18:05.972668   40272 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 19:18:05.972679   40272 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 19:18:05.972686   40272 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 19:18:05.972689   40272 command_runner.go:130] > # will be added.
	I1202 19:18:05.972804   40272 command_runner.go:130] > # default_capabilities = [
	I1202 19:18:05.972909   40272 command_runner.go:130] > # 	"CHOWN",
	I1202 19:18:05.973035   40272 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 19:18:05.973186   40272 command_runner.go:130] > # 	"FSETID",
	I1202 19:18:05.973194   40272 command_runner.go:130] > # 	"FOWNER",
	I1202 19:18:05.973322   40272 command_runner.go:130] > # 	"SETGID",
	I1202 19:18:05.973468   40272 command_runner.go:130] > # 	"SETUID",
	I1202 19:18:05.973500   40272 command_runner.go:130] > # 	"SETPCAP",
	I1202 19:18:05.973632   40272 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 19:18:05.973847   40272 command_runner.go:130] > # 	"KILL",
	I1202 19:18:05.973855   40272 command_runner.go:130] > # ]
	I1202 19:18:05.973864   40272 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 19:18:05.973870   40272 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 19:18:05.974039   40272 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 19:18:05.974052   40272 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 19:18:05.974059   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974062   40272 command_runner.go:130] > default_sysctls = [
	I1202 19:18:05.974148   40272 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 19:18:05.974179   40272 command_runner.go:130] > ]
	I1202 19:18:05.974185   40272 command_runner.go:130] > # List of devices on the host that a
	I1202 19:18:05.974297   40272 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 19:18:05.974459   40272 command_runner.go:130] > # allowed_devices = [
	I1202 19:18:05.974492   40272 command_runner.go:130] > # 	"/dev/fuse",
	I1202 19:18:05.974497   40272 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 19:18:05.974500   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974505   40272 command_runner.go:130] > # List of additional devices, specified as
	I1202 19:18:05.974517   40272 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 19:18:05.974706   40272 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 19:18:05.974717   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974722   40272 command_runner.go:130] > # additional_devices = [
	I1202 19:18:05.974730   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974735   40272 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 19:18:05.974870   40272 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 19:18:05.975061   40272 command_runner.go:130] > # 	"/etc/cdi",
	I1202 19:18:05.975069   40272 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 19:18:05.975204   40272 command_runner.go:130] > # ]
	I1202 19:18:05.975337   40272 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 19:18:05.975610   40272 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 19:18:05.975708   40272 command_runner.go:130] > # Defaults to false.
	I1202 19:18:05.975730   40272 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 19:18:05.975766   40272 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 19:18:05.975927   40272 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 19:18:05.976135   40272 command_runner.go:130] > # hooks_dir = [
	I1202 19:18:05.976173   40272 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 19:18:05.976199   40272 command_runner.go:130] > # ]
	I1202 19:18:05.976222   40272 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 19:18:05.976257   40272 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 19:18:05.976344   40272 command_runner.go:130] > # its default mounts from the following two files:
	I1202 19:18:05.976363   40272 command_runner.go:130] > #
	I1202 19:18:05.976438   40272 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 19:18:05.976465   40272 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 19:18:05.976485   40272 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 19:18:05.976561   40272 command_runner.go:130] > #
	I1202 19:18:05.976637   40272 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 19:18:05.976658   40272 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 19:18:05.976681   40272 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 19:18:05.976711   40272 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 19:18:05.976797   40272 command_runner.go:130] > #
	I1202 19:18:05.976852   40272 command_runner.go:130] > # default_mounts_file = ""
	I1202 19:18:05.976886   40272 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 19:18:05.976912   40272 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 19:18:05.976930   40272 command_runner.go:130] > # pids_limit = -1
	I1202 19:18:05.977014   40272 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1202 19:18:05.977040   40272 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 19:18:05.977112   40272 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 19:18:05.977136   40272 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 19:18:05.977153   40272 command_runner.go:130] > # log_size_max = -1
	I1202 19:18:05.977240   40272 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 19:18:05.977264   40272 command_runner.go:130] > # log_to_journald = false
	I1202 19:18:05.977344   40272 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 19:18:05.977370   40272 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 19:18:05.977390   40272 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 19:18:05.977478   40272 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 19:18:05.977500   40272 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 19:18:05.977570   40272 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 19:18:05.977596   40272 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 19:18:05.977614   40272 command_runner.go:130] > # read_only = false
	I1202 19:18:05.977722   40272 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 19:18:05.977797   40272 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 19:18:05.977817   40272 command_runner.go:130] > # live configuration reload.
	I1202 19:18:05.977836   40272 command_runner.go:130] > # log_level = "info"
	I1202 19:18:05.977872   40272 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 19:18:05.977956   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.978011   40272 command_runner.go:130] > # log_filter = ""
	I1202 19:18:05.978051   40272 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978073   40272 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 19:18:05.978093   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978128   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978214   40272 command_runner.go:130] > # uid_mappings = ""
	I1202 19:18:05.978236   40272 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978257   40272 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 19:18:05.978338   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978377   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978397   40272 command_runner.go:130] > # gid_mappings = ""
	I1202 19:18:05.978483   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 19:18:05.978556   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978583   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978606   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978700   40272 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 19:18:05.978728   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 19:18:05.978805   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978827   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978909   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978941   40272 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 19:18:05.979022   40272 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 19:18:05.979049   40272 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 19:18:05.979139   40272 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 19:18:05.979164   40272 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 19:18:05.979239   40272 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 19:18:05.979264   40272 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 19:18:05.979291   40272 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 19:18:05.979376   40272 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 19:18:05.979411   40272 command_runner.go:130] > # drop_infra_ctr = true
	I1202 19:18:05.979493   40272 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 19:18:05.979517   40272 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 19:18:05.979541   40272 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 19:18:05.979625   40272 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 19:18:05.979649   40272 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 19:18:05.979723   40272 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 19:18:05.979744   40272 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 19:18:05.979763   40272 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 19:18:05.979845   40272 command_runner.go:130] > # shared_cpuset = ""
	I1202 19:18:05.979867   40272 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 19:18:05.979937   40272 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 19:18:05.979961   40272 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 19:18:05.979983   40272 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 19:18:05.980069   40272 command_runner.go:130] > # pinns_path = ""
	I1202 19:18:05.980091   40272 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 19:18:05.980113   40272 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 19:18:05.980205   40272 command_runner.go:130] > # enable_criu_support = true
	I1202 19:18:05.980225   40272 command_runner.go:130] > # Enable/disable the generation of the container,
	I1202 19:18:05.980246   40272 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 19:18:05.980337   40272 command_runner.go:130] > # enable_pod_events = false
	I1202 19:18:05.980364   40272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 19:18:05.980435   40272 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 19:18:05.980456   40272 command_runner.go:130] > # default_runtime = "crun"
	I1202 19:18:05.980476   40272 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 19:18:05.980567   40272 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1202 19:18:05.980641   40272 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 19:18:05.980666   40272 command_runner.go:130] > # creation as a file is not desired either.
	I1202 19:18:05.980689   40272 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 19:18:05.980782   40272 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 19:18:05.980807   40272 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 19:18:05.980885   40272 command_runner.go:130] > # ]
	I1202 19:18:05.980907   40272 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 19:18:05.980989   40272 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 19:18:05.981060   40272 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 19:18:05.981080   40272 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 19:18:05.981155   40272 command_runner.go:130] > #
	I1202 19:18:05.981180   40272 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 19:18:05.981237   40272 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 19:18:05.981273   40272 command_runner.go:130] > # runtime_type = "oci"
	I1202 19:18:05.981291   40272 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 19:18:05.981311   40272 command_runner.go:130] > # inherit_default_runtime = false
	I1202 19:18:05.981423   40272 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 19:18:05.981442   40272 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 19:18:05.981461   40272 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 19:18:05.981479   40272 command_runner.go:130] > # monitor_env = []
	I1202 19:18:05.981507   40272 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 19:18:05.981530   40272 command_runner.go:130] > # allowed_annotations = []
	I1202 19:18:05.981553   40272 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 19:18:05.981571   40272 command_runner.go:130] > # no_sync_log = false
	I1202 19:18:05.981591   40272 command_runner.go:130] > # default_annotations = {}
	I1202 19:18:05.981620   40272 command_runner.go:130] > # stream_websockets = false
	I1202 19:18:05.981644   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.981733   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.981765   40272 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 19:18:05.981785   40272 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 19:18:05.981807   40272 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 19:18:05.981914   40272 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 19:18:05.981934   40272 command_runner.go:130] > #   in $PATH.
	I1202 19:18:05.981954   40272 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 19:18:05.981989   40272 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 19:18:05.982017   40272 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 19:18:05.982034   40272 command_runner.go:130] > #   state.
	I1202 19:18:05.982057   40272 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 19:18:05.982098   40272 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 19:18:05.982128   40272 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 19:18:05.982148   40272 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 19:18:05.982168   40272 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 19:18:05.982199   40272 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 19:18:05.982235   40272 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 19:18:05.982255   40272 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 19:18:05.982277   40272 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 19:18:05.982307   40272 command_runner.go:130] > #   The currently recognized values are:
	I1202 19:18:05.982329   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 19:18:05.983678   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 19:18:05.983703   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 19:18:05.983795   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 19:18:05.983829   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 19:18:05.983905   40272 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 19:18:05.983938   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 19:18:05.983958   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 19:18:05.983978   40272 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 19:18:05.984011   40272 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 19:18:05.984040   40272 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 19:18:05.984061   40272 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 19:18:05.984082   40272 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 19:18:05.984114   40272 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 19:18:05.984143   40272 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 19:18:05.984168   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 19:18:05.984191   40272 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 19:18:05.984220   40272 command_runner.go:130] > #   deprecated option "conmon".
	I1202 19:18:05.984244   40272 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 19:18:05.984265   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 19:18:05.984298   40272 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 19:18:05.984320   40272 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 19:18:05.984343   40272 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 19:18:05.984373   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 19:18:05.984413   40272 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1202 19:18:05.984432   40272 command_runner.go:130] > #   conmon-rs by using:
	I1202 19:18:05.984470   40272 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 19:18:05.984495   40272 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 19:18:05.984515   40272 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 19:18:05.984549   40272 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 19:18:05.984571   40272 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 19:18:05.984595   40272 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 19:18:05.984630   40272 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 19:18:05.984653   40272 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 19:18:05.984677   40272 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 19:18:05.984716   40272 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 19:18:05.984737   40272 command_runner.go:130] > #   when a machine crash happens.
	I1202 19:18:05.984765   40272 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 19:18:05.984801   40272 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 19:18:05.984825   40272 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 19:18:05.984846   40272 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 19:18:05.984877   40272 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 19:18:05.984902   40272 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1202 19:18:05.984921   40272 command_runner.go:130] > #
	I1202 19:18:05.984958   40272 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 19:18:05.984976   40272 command_runner.go:130] > #
	I1202 19:18:05.984996   40272 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 19:18:05.985026   40272 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 19:18:05.985052   40272 command_runner.go:130] > #
	I1202 19:18:05.985075   40272 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 19:18:05.985099   40272 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 19:18:05.985125   40272 command_runner.go:130] > #
	I1202 19:18:05.985149   40272 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 19:18:05.985169   40272 command_runner.go:130] > # feature.
	I1202 19:18:05.985199   40272 command_runner.go:130] > #
	I1202 19:18:05.985224   40272 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1202 19:18:05.985244   40272 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 19:18:05.985274   40272 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 19:18:05.985304   40272 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 19:18:05.985329   40272 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 19:18:05.985349   40272 command_runner.go:130] > #
	I1202 19:18:05.985381   40272 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 19:18:05.985404   40272 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 19:18:05.985422   40272 command_runner.go:130] > #
	I1202 19:18:05.985454   40272 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1202 19:18:05.985482   40272 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 19:18:05.985497   40272 command_runner.go:130] > #
	I1202 19:18:05.985518   40272 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 19:18:05.985550   40272 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 19:18:05.985582   40272 command_runner.go:130] > # limitation.
	I1202 19:18:05.985602   40272 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 19:18:05.985622   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 19:18:05.985670   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985689   40272 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 19:18:05.985704   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985709   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985725   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985731   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985741   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985745   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985749   40272 command_runner.go:130] > allowed_annotations = [
	I1202 19:18:05.985754   40272 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 19:18:05.985759   40272 command_runner.go:130] > ]
	I1202 19:18:05.985765   40272 command_runner.go:130] > privileged_without_host_devices = false
	I1202 19:18:05.985769   40272 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 19:18:05.985782   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 19:18:05.985786   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985795   40272 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 19:18:05.985801   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985810   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985821   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985829   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985833   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985837   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985845   40272 command_runner.go:130] > privileged_without_host_devices = false
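
The two runtime-handler tables above ([crio.runtime.runtimes.crun] and [crio.runtime.runtimes.runc]) follow the handler format documented earlier in this dump. As a rough sketch of how such a table maps onto Go types, the snippet below decodes a trimmed copy of the crun block with the third-party github.com/BurntSushi/toml package; this is an assumption made for illustration, not the configuration loader CRI-O itself uses.

    package main

    import (
    	"fmt"

    	"github.com/BurntSushi/toml"
    )

    // runtimeHandler carries the per-runtime keys shown in the dump above.
    type runtimeHandler struct {
    	RuntimePath        string   `toml:"runtime_path"`
    	RuntimeType        string   `toml:"runtime_type"`
    	RuntimeRoot        string   `toml:"runtime_root"`
    	MonitorPath        string   `toml:"monitor_path"`
    	MonitorCgroup      string   `toml:"monitor_cgroup"`
    	AllowedAnnotations []string `toml:"allowed_annotations"`
    }

    // config models just enough of the crio.runtime.runtimes hierarchy to decode
    // the handler tables; everything else in crio.conf is ignored here.
    type config struct {
    	Crio struct {
    		Runtime struct {
    			Runtimes map[string]runtimeHandler `toml:"runtimes"`
    		} `toml:"runtime"`
    	} `toml:"crio"`
    }

    func main() {
    	snippet := `
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/libexec/crio/crun"
    runtime_root = "/run/crun"
    monitor_path = "/usr/libexec/crio/conmon"
    monitor_cgroup = "pod"
    allowed_annotations = ["io.containers.trace-syscall"]
    `
    	var cfg config
    	if _, err := toml.Decode(snippet, &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%+v\n", cfg.Crio.Runtime.Runtimes["crun"])
    }
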
	I1202 19:18:05.985852   40272 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 19:18:05.985860   40272 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 19:18:05.985867   40272 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 19:18:05.985881   40272 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1202 19:18:05.985892   40272 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 19:18:05.985905   40272 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 19:18:05.985915   40272 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 19:18:05.985926   40272 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 19:18:05.985936   40272 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 19:18:05.985947   40272 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 19:18:05.985953   40272 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 19:18:05.985964   40272 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 19:18:05.985968   40272 command_runner.go:130] > # Example:
	I1202 19:18:05.985975   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 19:18:05.985980   40272 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 19:18:05.985987   40272 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 19:18:05.985993   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 19:18:05.985996   40272 command_runner.go:130] > # cpuset = "0-1"
	I1202 19:18:05.986000   40272 command_runner.go:130] > # cpushares = "5"
	I1202 19:18:05.986007   40272 command_runner.go:130] > # cpuquota = "1000"
	I1202 19:18:05.986011   40272 command_runner.go:130] > # cpuperiod = "100000"
	I1202 19:18:05.986014   40272 command_runner.go:130] > # cpulimit = "35"
	I1202 19:18:05.986018   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.986025   40272 command_runner.go:130] > # The workload name is workload-type.
	I1202 19:18:05.986033   40272 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 19:18:05.986041   40272 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 19:18:05.986047   40272 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 19:18:05.986057   40272 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 19:18:05.986069   40272 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1202 19:18:05.986075   40272 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 19:18:05.986082   40272 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 19:18:05.986086   40272 command_runner.go:130] > # Default value is set to true
	I1202 19:18:05.986096   40272 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 19:18:05.986102   40272 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 19:18:05.986107   40272 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 19:18:05.986117   40272 command_runner.go:130] > # Default value is set to 'false'
	I1202 19:18:05.986121   40272 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 19:18:05.986127   40272 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1202 19:18:05.986137   40272 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 19:18:05.986142   40272 command_runner.go:130] > # timezone = ""
	I1202 19:18:05.986151   40272 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 19:18:05.986154   40272 command_runner.go:130] > #
	I1202 19:18:05.986160   40272 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 19:18:05.986171   40272 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 19:18:05.986178   40272 command_runner.go:130] > [crio.image]
	I1202 19:18:05.986184   40272 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 19:18:05.986189   40272 command_runner.go:130] > # default_transport = "docker://"
	I1202 19:18:05.986197   40272 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 19:18:05.986205   40272 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986212   40272 command_runner.go:130] > # global_auth_file = ""
	I1202 19:18:05.986217   40272 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 19:18:05.986223   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986230   40272 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.986237   40272 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 19:18:05.986243   40272 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986248   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986255   40272 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 19:18:05.986260   40272 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 19:18:05.986266   40272 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1202 19:18:05.986275   40272 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1202 19:18:05.986281   40272 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 19:18:05.986291   40272 command_runner.go:130] > # pause_command = "/pause"
	I1202 19:18:05.986301   40272 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 19:18:05.986309   40272 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 19:18:05.986319   40272 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 19:18:05.986324   40272 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 19:18:05.986331   40272 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 19:18:05.986337   40272 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 19:18:05.986343   40272 command_runner.go:130] > # pinned_images = [
	I1202 19:18:05.986346   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986352   40272 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 19:18:05.986360   40272 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 19:18:05.986367   40272 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 19:18:05.986376   40272 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 19:18:05.986381   40272 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 19:18:05.986388   40272 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 19:18:05.986394   40272 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 19:18:05.986401   40272 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 19:18:05.986415   40272 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 19:18:05.986422   40272 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1202 19:18:05.986431   40272 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1202 19:18:05.986436   40272 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1202 19:18:05.986442   40272 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 19:18:05.986452   40272 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 19:18:05.986456   40272 command_runner.go:130] > # changing them here.
	I1202 19:18:05.986462   40272 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 19:18:05.986468   40272 command_runner.go:130] > # insecure_registries = [
	I1202 19:18:05.986472   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986478   40272 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 19:18:05.986486   40272 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 19:18:05.986490   40272 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 19:18:05.986495   40272 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 19:18:05.986499   40272 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 19:18:05.986505   40272 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 19:18:05.986518   40272 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 19:18:05.986525   40272 command_runner.go:130] > # auto_reload_registries = false
	I1202 19:18:05.986531   40272 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 19:18:05.986543   40272 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1202 19:18:05.986549   40272 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 19:18:05.986556   40272 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 19:18:05.986561   40272 command_runner.go:130] > # The mode of short name resolution.
	I1202 19:18:05.986568   40272 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 19:18:05.986578   40272 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1202 19:18:05.986583   40272 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 19:18:05.986588   40272 command_runner.go:130] > # short_name_mode = "enforcing"
	I1202 19:18:05.986593   40272 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1202 19:18:05.986602   40272 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 19:18:05.986606   40272 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 19:18:05.986612   40272 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 19:18:05.986619   40272 command_runner.go:130] > # CNI plugins.
	I1202 19:18:05.986623   40272 command_runner.go:130] > [crio.network]
	I1202 19:18:05.986629   40272 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 19:18:05.986637   40272 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1202 19:18:05.986640   40272 command_runner.go:130] > # cni_default_network = ""
	I1202 19:18:05.986646   40272 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 19:18:05.986655   40272 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 19:18:05.986661   40272 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 19:18:05.986664   40272 command_runner.go:130] > # plugin_dirs = [
	I1202 19:18:05.986668   40272 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 19:18:05.986674   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986678   40272 command_runner.go:130] > # List of included pod metrics.
	I1202 19:18:05.986681   40272 command_runner.go:130] > # included_pod_metrics = [
	I1202 19:18:05.986684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986690   40272 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1202 19:18:05.986696   40272 command_runner.go:130] > [crio.metrics]
	I1202 19:18:05.986701   40272 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 19:18:05.986705   40272 command_runner.go:130] > # enable_metrics = false
	I1202 19:18:05.986718   40272 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 19:18:05.986723   40272 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 19:18:05.986732   40272 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 19:18:05.986738   40272 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 19:18:05.986744   40272 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 19:18:05.986748   40272 command_runner.go:130] > # metrics_collectors = [
	I1202 19:18:05.986753   40272 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 19:18:05.986760   40272 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 19:18:05.986764   40272 command_runner.go:130] > # 	"containers_oom_total",
	I1202 19:18:05.986768   40272 command_runner.go:130] > # 	"processes_defunct",
	I1202 19:18:05.986777   40272 command_runner.go:130] > # 	"operations_total",
	I1202 19:18:05.986782   40272 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 19:18:05.986787   40272 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 19:18:05.986793   40272 command_runner.go:130] > # 	"operations_errors_total",
	I1202 19:18:05.986797   40272 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 19:18:05.986802   40272 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 19:18:05.986809   40272 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 19:18:05.986814   40272 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 19:18:05.986819   40272 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 19:18:05.986823   40272 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 19:18:05.986829   40272 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 19:18:05.986836   40272 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 19:18:05.986840   40272 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 19:18:05.986844   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986852   40272 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 19:18:05.986862   40272 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 19:18:05.986870   40272 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 19:18:05.986877   40272 command_runner.go:130] > # metrics_port = 9090
	I1202 19:18:05.986882   40272 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 19:18:05.986886   40272 command_runner.go:130] > # metrics_socket = ""
	I1202 19:18:05.986893   40272 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 19:18:05.986899   40272 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 19:18:05.986906   40272 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 19:18:05.986918   40272 command_runner.go:130] > # certificate on any modification event.
	I1202 19:18:05.986933   40272 command_runner.go:130] > # metrics_cert = ""
	I1202 19:18:05.986939   40272 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 19:18:05.986947   40272 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 19:18:05.986950   40272 command_runner.go:130] > # metrics_key = ""
	I1202 19:18:05.986956   40272 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 19:18:05.986962   40272 command_runner.go:130] > [crio.tracing]
	I1202 19:18:05.986967   40272 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 19:18:05.986972   40272 command_runner.go:130] > # enable_tracing = false
	I1202 19:18:05.986979   40272 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 19:18:05.986984   40272 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 19:18:05.986990   40272 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 19:18:05.986997   40272 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 19:18:05.987001   40272 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 19:18:05.987007   40272 command_runner.go:130] > [crio.nri]
	I1202 19:18:05.987011   40272 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 19:18:05.987015   40272 command_runner.go:130] > # enable_nri = true
	I1202 19:18:05.987019   40272 command_runner.go:130] > # NRI socket to listen on.
	I1202 19:18:05.987029   40272 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 19:18:05.987033   40272 command_runner.go:130] > # NRI plugin directory to use.
	I1202 19:18:05.987037   40272 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 19:18:05.987045   40272 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 19:18:05.987050   40272 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 19:18:05.987056   40272 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 19:18:05.987116   40272 command_runner.go:130] > # nri_disable_connections = false
	I1202 19:18:05.987126   40272 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 19:18:05.987130   40272 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 19:18:05.987136   40272 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 19:18:05.987142   40272 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 19:18:05.987147   40272 command_runner.go:130] > # NRI default validator configuration.
	I1202 19:18:05.987157   40272 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 19:18:05.987166   40272 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 19:18:05.987170   40272 command_runner.go:130] > # can be restricted/rejected:
	I1202 19:18:05.987178   40272 command_runner.go:130] > # - OCI hook injection
	I1202 19:18:05.987186   40272 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 19:18:05.987191   40272 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 19:18:05.987196   40272 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 19:18:05.987203   40272 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 19:18:05.987209   40272 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 19:18:05.987216   40272 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 19:18:05.987225   40272 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 19:18:05.987230   40272 command_runner.go:130] > #
	I1202 19:18:05.987234   40272 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 19:18:05.987239   40272 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 19:18:05.987245   40272 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 19:18:05.987254   40272 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 19:18:05.987260   40272 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 19:18:05.987268   40272 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 19:18:05.987279   40272 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 19:18:05.987283   40272 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 19:18:05.987286   40272 command_runner.go:130] > # ]
	I1202 19:18:05.987291   40272 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1202 19:18:05.987299   40272 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 19:18:05.987302   40272 command_runner.go:130] > [crio.stats]
	I1202 19:18:05.987308   40272 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 19:18:05.987316   40272 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 19:18:05.987320   40272 command_runner.go:130] > # stats_collection_period = 0
	I1202 19:18:05.987326   40272 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 19:18:05.987334   40272 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 19:18:05.987344   40272 command_runner.go:130] > # collection_period = 0
	I1202 19:18:05.987392   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941536561Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 19:18:05.987405   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941573139Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 19:18:05.987421   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941598771Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 19:18:05.987431   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941629007Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 19:18:05.987447   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.94184771Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.987460   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.942236436Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 19:18:05.987477   40272 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
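The config dump above shows CRI-O's metrics server defaults: metrics are disabled (enable_metrics commented out) and would listen on 127.0.0.1:9090 if turned on. A minimal sketch of scraping that Prometheus endpoint, assuming enable_metrics = true and the default host/port from the commented values above; metrics were not enabled in this run.

```go
// Sketch: scrape CRI-O's Prometheus metrics endpoint.
// Assumes enable_metrics = true and the default metrics_host/metrics_port
// shown in the config dump above; in this run metrics are disabled.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	// Prometheus exposition format: one metric sample per line.
	fmt.Printf("fetched %d bytes of metrics\n", len(body))
}
```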
	I1202 19:18:05.987606   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:05.987620   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:05.987644   40272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:18:05.987670   40272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:18:05.987799   40272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:18:05.987877   40272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:18:05.995250   40272 command_runner.go:130] > kubeadm
	I1202 19:18:05.995271   40272 command_runner.go:130] > kubectl
	I1202 19:18:05.995276   40272 command_runner.go:130] > kubelet
	I1202 19:18:05.995308   40272 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:18:05.995379   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:18:06.002605   40272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:18:06.015240   40272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:18:06.033933   40272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 19:18:06.047469   40272 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:18:06.051453   40272 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 19:18:06.051580   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:06.161840   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:06.543709   40272 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:18:06.543774   40272 certs.go:195] generating shared ca certs ...
	I1202 19:18:06.543803   40272 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:06.543968   40272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:18:06.544037   40272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:18:06.544058   40272 certs.go:257] generating profile certs ...
	I1202 19:18:06.544203   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:18:06.544311   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:18:06.544381   40272 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:18:06.544424   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:18:06.544458   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:18:06.544493   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:18:06.544537   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:18:06.544570   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:18:06.544599   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:18:06.544648   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:18:06.544683   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:18:06.544773   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:18:06.544828   40272 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:18:06.544854   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:18:06.544932   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:18:06.551062   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:18:06.551141   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:18:06.551220   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:06.551261   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.551291   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.551312   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.552213   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:18:06.569384   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:18:06.587883   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:18:06.609527   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:18:06.628039   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:18:06.644623   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:18:06.662478   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:18:06.679440   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:18:06.696330   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:18:06.713584   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:18:06.731033   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:18:06.747714   40272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:18:06.761265   40272 ssh_runner.go:195] Run: openssl version
	I1202 19:18:06.766652   40272 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 19:18:06.767017   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:18:06.774639   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.777834   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778051   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778107   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.818127   40272 command_runner.go:130] > b5213941
	I1202 19:18:06.818625   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:18:06.826391   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:18:06.834719   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838324   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838367   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838418   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.878978   40272 command_runner.go:130] > 51391683
	I1202 19:18:06.879420   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:18:06.887230   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:18:06.895470   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899261   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899287   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899335   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.940199   40272 command_runner.go:130] > 3ec20f2e
	I1202 19:18:06.940694   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
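The three openssl/ln steps above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). A hedged Go sketch of the same pattern, shelling out to the openssl command that appears in the log; the certificate path in main is one of the files from this run, but the helper itself is illustrative, not minikube's implementation.

```go
// Sketch: install a CA certificate under its OpenSSL subject-hash name,
// mirroring the `openssl x509 -hash -noout -in ...` and `ln -fs ...` steps
// shown in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")

	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```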
	I1202 19:18:06.948359   40272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951793   40272 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951816   40272 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 19:18:06.951822   40272 command_runner.go:130] > Device: 259,1	Inode: 1315539     Links: 1
	I1202 19:18:06.951851   40272 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:06.951865   40272 command_runner.go:130] > Access: 2025-12-02 19:13:58.595474405 +0000
	I1202 19:18:06.951871   40272 command_runner.go:130] > Modify: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951876   40272 command_runner.go:130] > Change: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951881   40272 command_runner.go:130] >  Birth: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951960   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:18:06.996850   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:06.997318   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:18:07.037433   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.037885   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:18:07.078161   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.078666   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:18:07.119364   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.119441   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:18:07.159628   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.160136   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:18:07.204176   40272 command_runner.go:130] > Certificate will not expire
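Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509, as a sketch; the certificate path is one of the files checked in this run, chosen for illustration.

```go
// Sketch: Go equivalent of `openssl x509 -noout -checkend 86400`:
// report whether a PEM-encoded certificate expires within the next 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```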
	I1202 19:18:07.204662   40272 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:07.204768   40272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:18:07.204851   40272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:18:07.233427   40272 cri.go:89] found id: ""
	I1202 19:18:07.233514   40272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:18:07.240330   40272 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 19:18:07.240352   40272 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 19:18:07.240359   40272 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 19:18:07.241346   40272 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:18:07.241363   40272 kubeadm.go:598] restartPrimaryControlPlane start ...
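The `ls` above finds /var/lib/kubelet/config.yaml, /var/lib/kubelet/kubeadm-flags.env and /var/lib/minikube/etcd, so minikube opts to restart the existing control plane instead of running a fresh `kubeadm init`. A sketch of that presence check over the same paths; the decision logic here is illustrative only, not minikube's actual code.

```go
// Sketch: decide between "restart existing cluster" and "fresh init" by
// probing for the files the log checks above. Illustrative logic only.
package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd",
	}
	existing := 0
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			existing++
		}
	}
	if existing == len(paths) {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing cluster state, would run kubeadm init")
	}
}
```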
	I1202 19:18:07.241437   40272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:18:07.248549   40272 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:18:07.248941   40272 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-374330" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249040   40272 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "functional-374330" cluster setting kubeconfig missing "functional-374330" context setting]
	I1202 19:18:07.249312   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.249749   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249896   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.250443   40272 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:18:07.250467   40272 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:18:07.250474   40272 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:18:07.250478   40272 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:18:07.250487   40272 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:18:07.250526   40272 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:18:07.250793   40272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:18:07.258519   40272 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:18:07.258557   40272 kubeadm.go:602] duration metric: took 17.188352ms to restartPrimaryControlPlane
	I1202 19:18:07.258569   40272 kubeadm.go:403] duration metric: took 53.913832ms to StartCluster
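The `diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` above exits with status 0, so the freshly generated config matches what is already deployed and the log concludes that no reconfiguration is required. A sketch of that exit-status check; mapping exit code 2 (or any other failure) to "needs reconfiguration" is an assumption of this sketch.

```go
// Sketch: use diff's exit status to decide whether the generated
// kubeadm.yaml.new differs from the deployed kubeadm.yaml, as in the log.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // exit 0: files identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // exit 1: files differ
	}
	return true, err // exit 2 or other failure (e.g. missing file)
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
	}
	if changed {
		fmt.Println("kubeadm config changed, cluster needs reconfiguration")
	} else {
		fmt.Println("the running cluster does not require reconfiguration")
	}
}
```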
	I1202 19:18:07.258583   40272 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.258647   40272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.259281   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.259482   40272 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:18:07.259876   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:07.259927   40272 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:18:07.259993   40272 addons.go:70] Setting storage-provisioner=true in profile "functional-374330"
	I1202 19:18:07.260007   40272 addons.go:239] Setting addon storage-provisioner=true in "functional-374330"
	I1202 19:18:07.260034   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.260061   40272 addons.go:70] Setting default-storageclass=true in profile "functional-374330"
	I1202 19:18:07.260107   40272 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-374330"
	I1202 19:18:07.260433   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.260513   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.266365   40272 out.go:179] * Verifying Kubernetes components...
	I1202 19:18:07.269343   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:07.293348   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.293507   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.293796   40272 addons.go:239] Setting addon default-storageclass=true in "functional-374330"
	I1202 19:18:07.293827   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.294253   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.304761   40272 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:18:07.307700   40272 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.307724   40272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 19:18:07.307789   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.332842   40272 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:07.332860   40272 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 19:18:07.332914   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.347890   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.373144   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.469482   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:07.472955   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.515784   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.293178   40272 node_ready.go:35] waiting up to 6m0s for node "functional-374330" to be "Ready" ...
	I1202 19:18:08.293301   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.293355   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.293568   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293595   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293615   40272 retry.go:31] will retry after 144.187129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293684   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293702   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293710   40272 retry.go:31] will retry after 132.365923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
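The GET requests against /api/v1/nodes/functional-374330 above are minikube polling for node readiness; the empty Response status lines show the request not yet getting an HTTP response while the apiserver comes back up. A rough equivalent using kubectl's JSONPath output in a loop; the kubeconfig path and node name match this run, but the polling cadence is an assumption.

```go
// Sketch: poll a node's Ready condition with kubectl until it reports
// "True" or a timeout elapses, roughly what the GETs above are doing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const node = "functional-374330"
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(6 * time.Minute)

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "node", node, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed cadence
	}
	fmt.Println("timed out waiting for node to be Ready")
}
```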
	I1202 19:18:08.427169   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.438559   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.510555   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513791   40272 retry.go:31] will retry after 461.570102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513742   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513825   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513833   40272 retry.go:31] will retry after 354.67857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.794133   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.794203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:08.868974   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.929070   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.932369   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.932402   40272 retry.go:31] will retry after 765.19043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.975575   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.036469   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.042296   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.042376   40272 retry.go:31] will retry after 433.124039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.293618   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.293713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:09.476440   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.538441   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.541412   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.541444   40272 retry.go:31] will retry after 747.346338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.698768   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:09.764666   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.764703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.764723   40272 retry.go:31] will retry after 541.76994ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.793827   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.793965   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.794261   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:10.289986   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:10.293340   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.293732   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:10.293780   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:10.307063   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:10.373573   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.373608   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.373627   40272 retry.go:31] will retry after 1.037281057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388739   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.388813   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388864   40272 retry.go:31] will retry after 1.072570226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.794280   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.794348   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.794651   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.293375   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.293466   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.293739   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.411088   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:11.462503   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:11.470558   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.470603   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.470624   40272 retry.go:31] will retry after 2.459470693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530455   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.530510   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530529   40272 retry.go:31] will retry after 2.35440359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.794013   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.794477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:12.294194   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.294271   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:12.294648   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:12.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.793567   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.793595   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.793686   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.794006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.885433   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:13.930854   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:13.940303   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:13.943330   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:13.943359   40272 retry.go:31] will retry after 2.562469282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000907   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:14.000951   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000969   40272 retry.go:31] will retry after 3.172954134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.294316   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.294381   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:14.793366   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.793435   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.793778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:14.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:15.293495   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:15.793590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.793675   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.794004   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.293435   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.506093   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:16.576298   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:16.580372   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.580403   40272 retry.go:31] will retry after 6.193423377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.793925   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.794050   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:16.794410   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:17.174990   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:17.234065   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:17.234161   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.234184   40272 retry.go:31] will retry after 6.017051757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.293565   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.293640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:17.793940   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.794318   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.294120   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.294191   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.294497   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.794258   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.794341   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.794641   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:18.794693   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:19.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:19.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.793693   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.794032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.293712   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.793838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:21.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:21.293929   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:21.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.293417   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.774666   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:22.793983   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.794053   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.835259   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:22.835293   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:22.835313   40272 retry.go:31] will retry after 8.891499319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.251502   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:23.293920   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.293995   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.294305   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:23.294361   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:23.316803   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:23.325390   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.325420   40272 retry.go:31] will retry after 5.436174555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.794140   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.794209   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.794514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.294165   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.294234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.294532   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.794307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.794552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:25.294405   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.294476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.294786   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:25.294838   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:25.793518   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.793593   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.793954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.293881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.793441   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.793515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.793898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.293636   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.294038   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.793924   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.793994   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.794242   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:27.794290   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:28.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.294085   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.294398   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.762126   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:28.793717   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.794058   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.820417   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:28.820461   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:28.820480   40272 retry.go:31] will retry after 5.23527752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:29.294048   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.294387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:29.794183   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.794303   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.794634   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:29.794706   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:30.294267   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.294340   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.294624   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:30.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.793398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.793762   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.293841   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.727474   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:31.785329   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:31.788538   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.788571   40272 retry.go:31] will retry after 14.027342391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.793764   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.793834   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.794170   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:32.293926   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.293991   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.294245   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:32.294283   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:32.794305   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.794380   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.794731   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.293682   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.294006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:34.056328   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:34.114988   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:34.115034   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.115053   40272 retry.go:31] will retry after 20.825216377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.294372   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.294768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:34.294823   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:34.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.293815   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.293900   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.294151   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.793855   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.793935   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.794205   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.293483   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.793564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.793873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:36.793925   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:37.293668   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.293762   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.294075   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:37.793947   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.794293   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.294087   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.294335   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.794481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:38.794533   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:39.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.294563   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:39.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.794411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.794661   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.793560   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.793636   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:41.293642   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:41.294091   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:41.793737   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.793809   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.794119   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:42.294249   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.294351   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.295481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1202 19:18:42.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.794309   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.794549   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:43.294307   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.294779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:43.294833   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:43.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.793526   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.293539   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.293609   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.293775   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.294288   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.794074   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.794139   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:45.794427   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:45.816754   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:45.885215   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:45.888326   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:45.888364   40272 retry.go:31] will retry after 11.821193731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:46.293908   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.293987   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.294332   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:46.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.794188   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.794450   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.294325   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.294656   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.793465   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:48.293461   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.293549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:48.293980   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:48.793521   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.793585   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.793925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.293671   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.293755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.294085   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.793786   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.793857   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.794203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:50.293936   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.294005   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.294362   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:50.794095   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.794170   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.794494   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.294326   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.294720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:52.793945   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:53.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.293667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.293927   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:53.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.793852   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.794188   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.294005   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.294075   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.294426   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.794205   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.794284   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.794553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:54.794600   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:54.941002   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:55.004086   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:55.004129   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.004148   40272 retry.go:31] will retry after 20.918145005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.293488   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.293564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.293885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:55.793617   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.793707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.794018   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.293767   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.793648   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.793755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.794090   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:57.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.293891   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.294211   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:57.294263   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:57.710107   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:57.765891   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:57.765928   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.765947   40272 retry.go:31] will retry after 13.115816401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.793988   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.794063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.794301   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.294217   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.793430   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.793738   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.293442   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.293550   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.793871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:59.793930   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:00.295673   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.295757   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.296162   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:00.793971   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.794393   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.294295   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.294639   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.793817   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:02.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:02.293931   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:02.793522   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.793600   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.293690   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.293758   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.294007   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.793884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:04.293572   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:04.294031   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:04.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.793792   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:05.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:19:05.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:05.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:05.793473   40272 type.go:168] "Request Body" body=""
	I1202 19:19:05.793568   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:05.793916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:06.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:19:06.293673   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:06.293971   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:06.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:19:06.793528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:06.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:06.793897   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:07.293734   40272 type.go:168] "Request Body" body=""
	I1202 19:19:07.293806   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:07.294152   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:07.793956   40272 type.go:168] "Request Body" body=""
	I1202 19:19:07.794035   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:07.794289   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:08.294051   40272 type.go:168] "Request Body" body=""
	I1202 19:19:08.294130   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:08.294477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:08.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:19:08.794232   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:08.794588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:08.794644   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:09.294344   40272 type.go:168] "Request Body" body=""
	I1202 19:19:09.294413   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:09.294705   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:09.793394   40272 type.go:168] "Request Body" body=""
	I1202 19:19:09.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:09.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:19:10.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:10.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:19:10.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:10.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.882157   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:10.938212   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:10.938272   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:10.938296   40272 retry.go:31] will retry after 16.990081142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:11.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:19:11.293533   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:11.293866   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:11.293912   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:11.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:19:11.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:11.793893   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:12.293418   40272 type.go:168] "Request Body" body=""
	I1202 19:19:12.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:12.293805   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:12.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:12.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:12.793829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:13.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:19:13.293523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:13.293887   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:13.293939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:13.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:19:13.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:13.793901   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:14.293451   40272 type.go:168] "Request Body" body=""
	I1202 19:19:14.293545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:14.293847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:14.793538   40272 type.go:168] "Request Body" body=""
	I1202 19:19:14.793612   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:14.793947   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:15.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:15.293500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:15.293781   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:15.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:19:15.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:15.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:15.793881   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:15.923138   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:15.976380   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:15.979446   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:15.979475   40272 retry.go:31] will retry after 43.938975662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:16.293891   40272 type.go:168] "Request Body" body=""
	I1202 19:19:16.293966   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:16.294319   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:16.793918   40272 type.go:168] "Request Body" body=""
	I1202 19:19:16.794007   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:16.794273   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:17.293817   40272 type.go:168] "Request Body" body=""
	I1202 19:19:17.293889   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:17.294222   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:17.794224   40272 type.go:168] "Request Body" body=""
	I1202 19:19:17.794322   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:17.794659   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:17.794718   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:18.293644   40272 type.go:168] "Request Body" body=""
	I1202 19:19:18.293745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:18.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:18.793819   40272 type.go:168] "Request Body" body=""
	I1202 19:19:18.793896   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:18.794214   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:19.294047   40272 type.go:168] "Request Body" body=""
	I1202 19:19:19.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:19.294429   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:19.794155   40272 type.go:168] "Request Body" body=""
	I1202 19:19:19.794251   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:19.794516   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:20.294336   40272 type.go:168] "Request Body" body=""
	I1202 19:19:20.294409   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:20.294750   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:20.294804   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:20.793466   40272 type.go:168] "Request Body" body=""
	I1202 19:19:20.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:20.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:21.293392   40272 type.go:168] "Request Body" body=""
	I1202 19:19:21.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:21.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:21.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:19:21.793509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:21.793880   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:22.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:19:22.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:22.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:22.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:19:22.793814   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:22.794072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:22.794110   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:23.293462   40272 type.go:168] "Request Body" body=""
	I1202 19:19:23.293552   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:23.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:23.793520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:23.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:24.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:19:24.293676   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:24.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:24.793402   40272 type.go:168] "Request Body" body=""
	I1202 19:19:24.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:24.793777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:25.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:19:25.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:25.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:25.293933   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:25.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:19:25.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:25.793822   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:26.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:26.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:26.293870   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:26.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:19:26.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:26.794001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:27.293786   40272 type.go:168] "Request Body" body=""
	I1202 19:19:27.293876   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:27.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:27.294188   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:27.794144   40272 type.go:168] "Request Body" body=""
	I1202 19:19:27.794229   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:27.794572   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:27.928884   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:27.980862   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983877   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983967   40272 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:28.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:19:28.293635   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:28.293939   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:28.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:19:28.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:28.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:29.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:19:29.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:29.293888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:29.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:19:29.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:29.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:29.793943   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:30.293604   40272 type.go:168] "Request Body" body=""
	I1202 19:19:30.293690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:30.293949   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:30.793447   40272 type.go:168] "Request Body" body=""
	I1202 19:19:30.793541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:30.793879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:31.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:19:31.293681   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:31.294045   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:31.793596   40272 type.go:168] "Request Body" body=""
	I1202 19:19:31.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:31.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:31.793973   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:32.293633   40272 type.go:168] "Request Body" body=""
	I1202 19:19:32.293736   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:32.294100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:32.794048   40272 type.go:168] "Request Body" body=""
	I1202 19:19:32.794127   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:32.794454   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:33.294107   40272 type.go:168] "Request Body" body=""
	I1202 19:19:33.294193   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:33.294469   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:33.794161   40272 type.go:168] "Request Body" body=""
	I1202 19:19:33.794241   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:33.794576   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:33.794630   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:34.294318   40272 type.go:168] "Request Body" body=""
	I1202 19:19:34.294390   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:34.294756   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:34.793348   40272 type.go:168] "Request Body" body=""
	I1202 19:19:34.793419   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:34.793816   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:35.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:19:35.293582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:35.293934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:35.793450   40272 type.go:168] "Request Body" body=""
	I1202 19:19:35.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:35.793853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:36.293403   40272 type.go:168] "Request Body" body=""
	I1202 19:19:36.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:36.293796   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:36.293849   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:36.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:19:36.793604   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:36.793910   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:37.293819   40272 type.go:168] "Request Body" body=""
	I1202 19:19:37.293921   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:37.294237   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:37.793992   40272 type.go:168] "Request Body" body=""
	I1202 19:19:37.794062   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:37.794317   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:38.294129   40272 type.go:168] "Request Body" body=""
	I1202 19:19:38.294219   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:38.294552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:38.294607   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:38.794375   40272 type.go:168] "Request Body" body=""
	I1202 19:19:38.794449   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:38.794753   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:39.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:19:39.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:39.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:39.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:19:39.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:39.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:40.293464   40272 type.go:168] "Request Body" body=""
	I1202 19:19:40.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:40.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:40.793609   40272 type.go:168] "Request Body" body=""
	I1202 19:19:40.793726   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:40.793971   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:40.794046   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:41.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:19:41.293783   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:41.294101   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:41.793762   40272 type.go:168] "Request Body" body=""
	I1202 19:19:41.793835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:41.794208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:42.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:19:42.293532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:42.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:42.793895   40272 type.go:168] "Request Body" body=""
	I1202 19:19:42.793974   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:42.794274   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:42.794330   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:43.293462   40272 type.go:168] "Request Body" body=""
	I1202 19:19:43.293536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:43.293847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:43.793403   40272 type.go:168] "Request Body" body=""
	I1202 19:19:43.793470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:43.793794   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:44.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:44.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:44.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:44.793570   40272 type.go:168] "Request Body" body=""
	I1202 19:19:44.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:44.793981   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:45.293992   40272 type.go:168] "Request Body" body=""
	I1202 19:19:45.294153   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:45.294968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:45.295095   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:45.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:19:45.793517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:45.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:46.293433   40272 type.go:168] "Request Body" body=""
	I1202 19:19:46.293523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:46.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:46.793580   40272 type.go:168] "Request Body" body=""
	I1202 19:19:46.793672   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:46.794005   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:47.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:19:47.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:47.294181   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:47.794191   40272 type.go:168] "Request Body" body=""
	I1202 19:19:47.794264   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:47.794574   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:47.794634   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:48.294351   40272 type.go:168] "Request Body" body=""
	I1202 19:19:48.294414   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:48.294658   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:48.793387   40272 type.go:168] "Request Body" body=""
	I1202 19:19:48.793458   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:48.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:49.293548   40272 type.go:168] "Request Body" body=""
	I1202 19:19:49.293622   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:49.293967   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:49.793638   40272 type.go:168] "Request Body" body=""
	I1202 19:19:49.793723   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:49.793982   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:50.293669   40272 type.go:168] "Request Body" body=""
	I1202 19:19:50.293738   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:50.294063   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:50.294115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:50.793649   40272 type.go:168] "Request Body" body=""
	I1202 19:19:50.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:50.794030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:51.293404   40272 type.go:168] "Request Body" body=""
	I1202 19:19:51.293477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:51.293789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:51.793444   40272 type.go:168] "Request Body" body=""
	I1202 19:19:51.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:51.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:52.293605   40272 type.go:168] "Request Body" body=""
	I1202 19:19:52.293689   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:52.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:52.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:19:52.794056   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:52.794307   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:52.794355   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:53.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:19:53.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:53.294542   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:53.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:19:53.794460   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:53.794789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:54.293367   40272 type.go:168] "Request Body" body=""
	I1202 19:19:54.293448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:54.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:54.793399   40272 type.go:168] "Request Body" body=""
	I1202 19:19:54.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:54.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:55.293465   40272 type.go:168] "Request Body" body=""
	I1202 19:19:55.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:55.293912   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:55.293970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:55.793387   40272 type.go:168] "Request Body" body=""
	I1202 19:19:55.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:55.793748   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:56.293378   40272 type.go:168] "Request Body" body=""
	I1202 19:19:56.293444   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:56.293784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:56.793485   40272 type.go:168] "Request Body" body=""
	I1202 19:19:56.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:56.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:57.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:19:57.293823   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:57.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:57.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:57.794072   40272 type.go:168] "Request Body" body=""
	I1202 19:19:57.794142   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:57.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:58.294111   40272 type.go:168] "Request Body" body=""
	I1202 19:19:58.294203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:58.294515   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:58.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:19:58.794402   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:58.794662   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:59.293346   40272 type.go:168] "Request Body" body=""
	I1202 19:19:59.293443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:59.293801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:59.793412   40272 type.go:168] "Request Body" body=""
	I1202 19:19:59.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:59.793844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:59.793894   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:59.919155   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:59.978732   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978768   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978842   40272 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:59.981270   40272 out.go:179] * Enabled addons: 
	I1202 19:19:59.984008   40272 addons.go:530] duration metric: took 1m52.724080055s for enable addons: enabled=[]
	I1202 19:20:00.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.319155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=25
	I1202 19:20:00.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.793581   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.293643   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.294269   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.794085   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:01.794475   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:02.294283   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.294801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:02.793839   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.793918   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.794224   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.293780   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.293848   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.294097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.793818   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.793890   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.794190   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:04.294069   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.294138   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.294439   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:04.294488   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:04.794180   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.794261   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.794525   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.294270   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.294339   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.294637   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.793358   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.793447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.793770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.794145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:06.794195   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:07.293975   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.294054   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.294413   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:07.794308   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.794425   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.794772   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.293671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.294020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:09.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.293769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:09.293828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:09.794253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.794326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.794686   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:11.293475   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.293548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:11.293934   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:11.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.293544   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.293610   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.293915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.793833   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.793916   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.794241   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:13.293799   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.293872   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.294179   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:13.294238   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:13.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.794022   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.794276   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.294026   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.294105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.294453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.794135   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.794207   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:15.294253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.294326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:15.294638   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:15.793355   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.793426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.793551   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.793621   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.293774   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.293867   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.794117   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.794213   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.794539   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:17.794594   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:18.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.294374   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:18.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.794070   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:20.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.293900   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:20.293961   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:20.793436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.293924   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.793463   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.793956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.293478   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.793771   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:22.793827   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:23.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:23.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.293436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.293506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:24.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:25.293608   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.293707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.294025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:25.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.794022   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:26.794082   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:27.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.293785   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.294032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:27.793959   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.294157   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.294237   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.294582   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.794354   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.794429   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.794706   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:28.794758   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:29.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:29.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.293432   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.293782   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.793582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:31.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.293580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:31.293985   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:31.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.793797   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.793874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.794194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:33.293954   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.294018   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.294268   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:33.294307   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:33.794022   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.794093   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.794394   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.294075   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.294145   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.294479   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.794081   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.794161   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.794411   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:35.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.294307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.294631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:35.294684   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:35.794291   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.794361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.794710   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.294383   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.294672   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.793869   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.293817   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.294175   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.794113   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.794365   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:37.794404   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:38.294151   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.294567   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:38.794364   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.794441   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.794795   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.794051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:40.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.293749   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:40.294131   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:40.793755   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.794137   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.293804   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.293874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.294208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.794044   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.794437   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:42.294271   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.294354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.294638   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:42.294682   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:42.793464   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.293529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.293884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.793555   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.793904   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.293677   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.793724   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.793796   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:44.794158   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:45.293768   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.293839   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.294135   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:45.794039   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.294279   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.294679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.793388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.793455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:47.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.293786   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.294051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:47.294093   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:47.794031   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.794101   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.294153   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.294227   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.294472   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.794239   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.794680   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.293461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.293815   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.793404   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.793801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:49.793850   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:50.293494   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.293926   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:50.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.793579   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.293925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.794124   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:51.794181   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:52.293850   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.293930   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.294277   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:52.794083   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.794149   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.794406   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.294121   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.294195   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.294529   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.794350   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.794679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:53.794733   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:54.293471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.293541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:54.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:56.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.293455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:56.293831   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:56.793498   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.793574   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.793934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.293700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.293941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.793858   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.793928   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.794244   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:58.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.294083   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.294416   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:58.294470   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:58.794152   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.794222   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.794483   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.294312   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.294645   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.794292   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.794364   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.794674   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.293476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.293799   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.793832   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:00.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:01.293577   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:01.793727   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.793804   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.293823   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.293903   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.294253   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.794285   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.794354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.794650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:02.794701   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:03.293400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.293470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:03.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.293824   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.793783   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:05.293327   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.293398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:05.293767   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:05.794396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.794464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.794774   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.293683   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.793543   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:07.293810   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.293905   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.294228   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:07.294294   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:07.794228   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.794296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.794557   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.294314   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.294391   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.294721   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.793513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.293515   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.793507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.793849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:09.793915   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:10.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.293946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:10.793633   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.793713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.794014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.293862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.293767   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:12.293819   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:12.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.293560   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.293641   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:14.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.293853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:14.293920   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:14.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.293520   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.293586   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.793540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.793613   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:16.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.293615   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:16.293998   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:16.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.293689   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.293770   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.793898   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.793968   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.794294   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:18.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.294082   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.294374   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:18.294428   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:18.794173   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.794258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.794584   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.294375   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.294447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.294755   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.793492   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.793769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.793542   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.793614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.793957   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:20.794013   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:21.293675   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.293740   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:21.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.293837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.793766   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.793836   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.794155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:22.794204   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:23.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:23.793615   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.794078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.793860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:25.293571   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.293642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.293963   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:25.294010   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:25.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.793479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.793840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.793506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:27.293759   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.294093   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:27.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:27.794030   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.794105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.794432   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.294126   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.294546   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.794342   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.794587   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.293336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.793558   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:29.794070   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:30.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.293704   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:30.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.793500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:32.293467   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.293899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:32.293955   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:32.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.793527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.293566   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.293634   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.793481   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.793759   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:34.793805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:35.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.293507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:35.793599   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.793691   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.293780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.793879   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.793947   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.794270   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:36.794327   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:37.294002   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.294382   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:37.794293   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.794366   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.794623   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.293793   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.793479   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.793551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.793911   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:39.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:39.293900   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:39.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.793400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.793469   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.293410   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.293820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.793779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:41.793832   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:42.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:42.793809   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.793881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.794230   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.794300   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.794607   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:43.794654   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:44.294246   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.294318   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:44.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.793399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.793724   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.793836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:46.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.293848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:46.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:46.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.793766   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.293717   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.294035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.793981   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.794397   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:48.293997   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.294340   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:48.294384   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:48.794112   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.794192   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.794535   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.294292   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.794401   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.794648   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.293343   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.293431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.293749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.793332   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.793431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.793733   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:50.793781   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:51.294382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.294749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:51.794404   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.794484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.794827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.793741   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.794061   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:52.794098   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:53.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.293502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.293842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:53.793547   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.793619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.293686   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.293772   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:55.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.293522   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:55.293916   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:55.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.793966   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.793700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.794037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:57.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.293812   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.294147   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:57.294199   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:57.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.794029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.794360   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.294144   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.294215   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.294530   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.794311   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.794384   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.794669   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.293382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.293457   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.793915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:59.793970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:00.294203   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.294291   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:00.794373   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.794448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.794765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.793408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:02.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.293521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.293831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:02.293882   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:02.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.793524   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.294092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.793779   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.793863   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:04.294013   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.294096   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.294427   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:04.294479   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:04.794192   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.794518   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.294290   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.294361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.294692   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.293537   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.293889   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.793886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:06.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:07.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.293561   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:07.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.794431   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.294315   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.793325   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.793395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:09.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:09.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:09.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.793938   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.293512   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.293605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.293914   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.793473   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:11.293419   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:11.293911   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:11.793571   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.793667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.793998   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.293707   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.294044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.794038   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.794457   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:13.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.294294   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.294608   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:13.294662   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:13.793319   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.793385   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.793631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.293401   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.793974   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.293634   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.293715   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.294019   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.793580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.793905   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:15.793957   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:16.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.293753   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.294105   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:16.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.794139   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.294035   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.294104   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.294447   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.794420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.794500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.794802   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:17.794864   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:18.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:18.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.793908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.793487   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:20.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:20.294043   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:20.793747   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.793818   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.293829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.294078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.793486   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.293599   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.293684   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.293961   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.793847   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.793919   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.794173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:22.794221   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:23.294004   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.294391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:23.794182   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.794569   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.294310   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.294382   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.294678   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:25.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.293849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:25.293899   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:25.793411   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.793784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.293511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:27.293716   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.293790   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:27.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:27.794020   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.794114   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.294228   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.294302   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.294604   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.794372   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.794442   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.793369   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.793452   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.793775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:29.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:30.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:30.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.793820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.293618   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.293975   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.793639   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.793724   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.794026   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:31.794076   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:32.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.293867   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:32.793458   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.793534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.293479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.293808   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.793577   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:34.293638   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.293733   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.294053   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:34.294138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:34.793757   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.794123   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.293805   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.293875   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.294212   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.793796   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.793870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.794183   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:36.293916   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.293981   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.294225   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:36.294266   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:36.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.794051   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.794349   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.294147   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.294225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.294553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.794437   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.794726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.293504   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.793561   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.793979   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:38.794037   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:39.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.293812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:39.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.793508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.293825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.793461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.793725   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:41.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:41.293919   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:41.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.306206   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.306286   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.306588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.793842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:43.293564   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:43.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:43.793719   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.794033   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.293420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.293840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.794225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.794573   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.293335   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.293432   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.293823   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.793584   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.793699   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.794020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:45.794077   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:46.293765   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.294194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:46.793979   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.294352   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.294421   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.294757   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.793514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:48.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.293488   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:48.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:48.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.793896   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.793746   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.794140   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:50.293958   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.294029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.294356   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:50.794160   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.794234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.794577   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.294330   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.294654   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.793400   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.293818   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.793765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:52.793817   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:53.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:53.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.793594   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.793990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.293543   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.293619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.293933   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.793885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:54.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:55.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.293897   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:55.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.793627   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.293469   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.293845   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.793575   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.793643   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.793943   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:56.793996   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:57.293776   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.293861   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:57.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.794158   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.294275   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.294346   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.294665   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.793386   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.793763   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:59.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.293903   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:59.293962   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:59.793451   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.793525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.296332   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.296406   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.296694   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.293498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.793424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:01.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:02.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.293637   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.294144   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:02.793976   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.794047   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.294017   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.294088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.294379   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.794118   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.794444   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:03.794495   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:04.294106   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.294176   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.294496   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:04.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.794365   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.794711   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.793605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.793941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:06.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.293719   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.294067   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:06.294117   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:06.793866   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.793938   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.293887   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.293967   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.294287   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.794150   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.794403   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:08.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.294258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.294594   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:08.294647   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:08.793335   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.793404   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.793760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.793478   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.293956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.793532   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.793599   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:10.793903   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:11.293547   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.293625   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:11.793691   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.793764   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.794076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.793673   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.794066   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:12.794115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:13.293795   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.293870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.294207   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:13.793969   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.794283   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.294039   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.294109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.294436   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.794094   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.794171   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.794488   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:14.794541   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:15.294282   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.294357   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.294611   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:15.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.794443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.794770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.293836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.793477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:17.293700   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:17.294109   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:17.793903   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.793973   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.794593   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.294328   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.294646   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.793322   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.793392   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.793726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.793807   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:19.793870   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:20.293525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.293596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:20.793525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.793601   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.793946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.293705   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.294002   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.793707   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.793780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.794097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:21.794151   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:22.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.293892   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.294246   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:22.794023   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.794088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.794347   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.294098   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.294169   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.294495   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.794344   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.794436   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.794764   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:23.794818   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:24.293402   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.293471   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:24.793418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.793495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.293624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.293973   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.793669   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.793735   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.793985   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:26.293681   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.293789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.294111   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:26.294163   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:26.793710   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.793789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.794114   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:27.293843   40272 type.go:168] "Request Body" body=""
	I1202 19:23:27.293914   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:27.294239   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:27.794080   40272 type.go:168] "Request Body" body=""
	I1202 19:23:27.794155   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:27.794487   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:28.294258   40272 type.go:168] "Request Body" body=""
	I1202 19:23:28.294337   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:28.294650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:28.294705   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:28.793349   40272 type.go:168] "Request Body" body=""
	I1202 19:23:28.793419   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:28.793701   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:29.294241   40272 type.go:168] "Request Body" body=""
	I1202 19:23:29.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:29.294701   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:29.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:23:29.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:29.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:30.293509   40272 type.go:168] "Request Body" body=""
	I1202 19:23:30.293582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:30.293886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:30.793466   40272 type.go:168] "Request Body" body=""
	I1202 19:23:30.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:30.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:30.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:31.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:23:31.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:31.293801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:31.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:23:31.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:32.293492   40272 type.go:168] "Request Body" body=""
	I1202 19:23:32.293560   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:32.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:32.793447   40272 type.go:168] "Request Body" body=""
	I1202 19:23:32.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:32.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:33.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:23:33.293569   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:33.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:33.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:33.793580   40272 type.go:168] "Request Body" body=""
	I1202 19:23:33.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:33.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:34.293678   40272 type.go:168] "Request Body" body=""
	I1202 19:23:34.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:34.294103   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:34.793774   40272 type.go:168] "Request Body" body=""
	I1202 19:23:34.793844   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:34.794094   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:35.293808   40272 type.go:168] "Request Body" body=""
	I1202 19:23:35.293879   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:35.294203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:35.294261   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:35.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:23:35.794103   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:35.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:36.294141   40272 type.go:168] "Request Body" body=""
	I1202 19:23:36.294296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:36.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:36.794385   40272 type.go:168] "Request Body" body=""
	I1202 19:23:36.794460   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:36.794791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:37.293721   40272 type.go:168] "Request Body" body=""
	I1202 19:23:37.293800   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:37.294132   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:37.793966   40272 type.go:168] "Request Body" body=""
	I1202 19:23:37.794036   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:37.794297   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:37.794344   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:38.294080   40272 type.go:168] "Request Body" body=""
	I1202 19:23:38.294155   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:38.294482   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:38.794270   40272 type.go:168] "Request Body" body=""
	I1202 19:23:38.794347   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:38.794663   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:39.293411   40272 type.go:168] "Request Body" body=""
	I1202 19:23:39.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:39.293789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:39.793476   40272 type.go:168] "Request Body" body=""
	I1202 19:23:39.793548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:39.793865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:40.293455   40272 type.go:168] "Request Body" body=""
	I1202 19:23:40.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:40.293907   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:40.293963   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:40.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:23:40.793587   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:40.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:41.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:23:41.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:41.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:41.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:23:41.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:41.793844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:42.293444   40272 type.go:168] "Request Body" body=""
	I1202 19:23:42.293546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:42.293898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:42.793891   40272 type.go:168] "Request Body" body=""
	I1202 19:23:42.793960   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:42.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:42.794326   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:43.294061   40272 type.go:168] "Request Body" body=""
	I1202 19:23:43.294133   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:43.294467   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:43.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:23:43.794316   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:43.794572   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:44.294331   40272 type.go:168] "Request Body" body=""
	I1202 19:23:44.294411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:44.294778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:44.793422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:44.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:45.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:45.293631   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:45.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:45.293977   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:45.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:23:45.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:45.793835   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:46.293534   40272 type.go:168] "Request Body" body=""
	I1202 19:23:46.293612   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:46.294003   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:46.793541   40272 type.go:168] "Request Body" body=""
	I1202 19:23:46.793611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:46.793878   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:47.293767   40272 type.go:168] "Request Body" body=""
	I1202 19:23:47.293837   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:47.294173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:47.294229   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:47.794221   40272 type.go:168] "Request Body" body=""
	I1202 19:23:47.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:47.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:48.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:48.293486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:48.293760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:48.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:23:48.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:48.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:49.293446   40272 type.go:168] "Request Body" body=""
	I1202 19:23:49.293546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:49.293944   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:49.793512   40272 type.go:168] "Request Body" body=""
	I1202 19:23:49.793587   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:49.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:49.793918   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:50.293594   40272 type.go:168] "Request Body" body=""
	I1202 19:23:50.293685   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:50.294016   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:50.793739   40272 type.go:168] "Request Body" body=""
	I1202 19:23:50.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:50.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:51.293812   40272 type.go:168] "Request Body" body=""
	I1202 19:23:51.293881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:51.294164   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:51.793945   40272 type.go:168] "Request Body" body=""
	I1202 19:23:51.794024   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:51.794370   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:51.794425   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:52.294111   40272 type.go:168] "Request Body" body=""
	I1202 19:23:52.294180   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:52.294514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:52.794387   40272 type.go:168] "Request Body" body=""
	I1202 19:23:52.794468   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:52.794736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:53.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:23:53.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:53.293866   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:53.793588   40272 type.go:168] "Request Body" body=""
	I1202 19:23:53.793680   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:53.794035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:54.293418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:54.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:54.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:54.293865   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:54.793520   40272 type.go:168] "Request Body" body=""
	I1202 19:23:54.793596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:54.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:55.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:23:55.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:55.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:55.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:23:55.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:55.793859   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:56.293555   40272 type.go:168] "Request Body" body=""
	I1202 19:23:56.293632   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:56.293967   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:56.294027   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:56.793450   40272 type.go:168] "Request Body" body=""
	I1202 19:23:56.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:56.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:57.293744   40272 type.go:168] "Request Body" body=""
	I1202 19:23:57.293822   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:57.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:57.794034   40272 type.go:168] "Request Body" body=""
	I1202 19:23:57.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:57.794429   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:58.294164   40272 type.go:168] "Request Body" body=""
	I1202 19:23:58.294240   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:58.294551   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:58.294605   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:58.794324   40272 type.go:168] "Request Body" body=""
	I1202 19:23:58.794395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:58.794640   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:59.293351   40272 type.go:168] "Request Body" body=""
	I1202 19:23:59.293426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:59.293726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:59.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:23:59.793529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:59.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:00.301671   40272 type.go:168] "Request Body" body=""
	I1202 19:24:00.301760   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:00.302092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:00.302138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:00.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:24:00.793509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:00.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:01.293581   40272 type.go:168] "Request Body" body=""
	I1202 19:24:01.293683   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:01.294068   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:01.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:24:01.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:01.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:02.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:24:02.293633   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:02.293968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:02.793760   40272 type.go:168] "Request Body" body=""
	I1202 19:24:02.793866   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:02.794174   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:02.794228   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:03.293986   40272 type.go:168] "Request Body" body=""
	I1202 19:24:03.294063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:03.296865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1202 19:24:03.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:24:03.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:03.793994   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:04.293692   40272 type.go:168] "Request Body" body=""
	I1202 19:24:04.293763   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:04.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:04.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:24:04.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:04.793833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:05.293536   40272 type.go:168] "Request Body" body=""
	I1202 19:24:05.293614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:05.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:05.294030   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:05.793675   40272 type.go:168] "Request Body" body=""
	I1202 19:24:05.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:05.794044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:06.293762   40272 type.go:168] "Request Body" body=""
	I1202 19:24:06.293838   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:06.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:06.793966   40272 type.go:168] "Request Body" body=""
	I1202 19:24:06.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:06.794391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:07.294030   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.294116   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.298234   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1202 19:24:07.301805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:07.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.794025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:08.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:24:08.293509   40272 node_ready.go:38] duration metric: took 6m0.000285031s for node "functional-374330" to be "Ready" ...
	I1202 19:24:08.296878   40272 out.go:203] 
	W1202 19:24:08.299748   40272 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:24:08.299768   40272 out.go:285] * 
	* 
	W1202 19:24:08.301915   40272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:24:08.304698   40272 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-374330 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.562316884s for "functional-374330" cluster.
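Note on the failure above: the stderr block shows the node-ready wait polling GET https://192.168.49.2:8441/api/v1/nodes/functional-374330 roughly every 500 ms, getting "connect: connection refused" on every attempt, and giving up once the 6-minute wait expires, which is what produces exit status 80. A minimal, self-contained sketch of an equivalent probe is below; the URL, poll interval, and timeout are taken from the log, and the program itself is purely illustrative (it is not minikube's code and not part of the test harness).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL, poll interval, and overall timeout mirror the failed wait in the log above.
	const url = "https://192.168.49.2:8441/api/v1/nodes/functional-374330"
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// With the apiserver down this prints "... connect: connection refused", as in the log.
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		fmt.Println("apiserver answered with HTTP status", resp.StatusCode)
		resp.Body.Close()
		return
	}
	fmt.Println("timed out after 6m waiting for the apiserver")
}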
I1202 19:24:08.960641    4470 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
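The inspect output above shows the container still running and the apiserver port 8441/tcp published on the host at 127.0.0.1:32786. A quick TCP reachability check against that published port is sketched below; the address comes from the inspect output, and the snippet is illustrative only (not part of the harness).

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 127.0.0.1:32786 is the host binding of the container's 8441/tcp port in the inspect output above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:32786", 2*time.Second)
	if err != nil {
		fmt.Println("published apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("host port 32786 accepts TCP connections (container port 8441)")
}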
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (350.746199ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 logs -n 25: (1.033643882s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-535807 image save kicbase/echo-server:functional-535807 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image rm kicbase/echo-server:functional-535807 --alsologtostderr                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image save --daemon kicbase/echo-server:functional-535807 --alsologtostderr                                                             │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/test/nested/copy/4470/hosts                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/4470.pem                                                                                                    │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /usr/share/ca-certificates/4470.pem                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/44702.pem                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /usr/share/ca-certificates/44702.pem                                                                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format short --alsologtostderr                                                                                               │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh pgrep buildkitd                                                                                                                     │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ image          │ functional-535807 image ls --format yaml --alsologtostderr                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format json --alsologtostderr                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr                                                    │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format table --alsologtostderr                                                                                               │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ delete         │ -p functional-535807                                                                                                                                      │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ start          │ -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ start          │ -p functional-374330 --alsologtostderr -v=8                                                                                                               │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:18 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:18:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:18:02.458749   40272 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:18:02.458868   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.458880   40272 out.go:374] Setting ErrFile to fd 2...
	I1202 19:18:02.458886   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.459160   40272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:18:02.459549   40272 out.go:368] Setting JSON to false
	I1202 19:18:02.460340   40272 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3621,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:18:02.460405   40272 start.go:143] virtualization:  
	I1202 19:18:02.464020   40272 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:18:02.467892   40272 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:18:02.467969   40272 notify.go:221] Checking for updates...
	I1202 19:18:02.474021   40272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:18:02.477064   40272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:02.480130   40272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:18:02.483164   40272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:18:02.486142   40272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:18:02.489587   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:02.489732   40272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:18:02.527318   40272 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:18:02.527492   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.584790   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.575369586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.584902   40272 docker.go:319] overlay module found
	I1202 19:18:02.588038   40272 out.go:179] * Using the docker driver based on existing profile
	I1202 19:18:02.590861   40272 start.go:309] selected driver: docker
	I1202 19:18:02.590885   40272 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.591008   40272 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:18:02.591102   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.644457   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.635623623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.644867   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:02.644933   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:02.644976   40272 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.648222   40272 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:18:02.651050   40272 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:18:02.654072   40272 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:18:02.657154   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:02.657223   40272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:18:02.676274   40272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:18:02.676298   40272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:18:02.730421   40272 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:18:02.934277   40272 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:18:02.934463   40272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:18:02.934535   40272 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934623   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:18:02.934634   40272 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.203µs
	I1202 19:18:02.934648   40272 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:18:02.934660   40272 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934690   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:18:02.934695   40272 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.324µs
	I1202 19:18:02.934701   40272 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934707   40272 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:18:02.934711   40272 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934738   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:18:02.934736   40272 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934743   40272 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.525µs
	I1202 19:18:02.934750   40272 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934759   40272 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934774   40272 start.go:364] duration metric: took 25.468µs to acquireMachinesLock for "functional-374330"
	I1202 19:18:02.934787   40272 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:18:02.934789   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:18:02.934792   40272 fix.go:54] fixHost starting: 
	I1202 19:18:02.934794   40272 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 35.864µs
	I1202 19:18:02.934800   40272 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934809   40272 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934834   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:18:02.934845   40272 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 19:18:02.934851   40272 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934859   40272 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934885   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:18:02.934890   40272 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.983µs
	I1202 19:18:02.934895   40272 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:18:02.934913   40272 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934941   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:18:02.934946   40272 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.707µs
	I1202 19:18:02.934951   40272 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:18:02.934960   40272 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934985   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:18:02.934990   40272 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.646µs
	I1202 19:18:02.934995   40272 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:18:02.935015   40272 cache.go:87] Successfully saved all images to host disk.
	I1202 19:18:02.935074   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:02.953213   40272 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:18:02.953249   40272 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:18:02.956557   40272 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:18:02.956597   40272 machine.go:94] provisionDockerMachine start ...
	I1202 19:18:02.956677   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:02.973977   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:02.974301   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:02.974316   40272 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:18:03.125393   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.125419   40272 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:18:03.125485   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.143103   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.143432   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.143449   40272 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:18:03.303153   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.303231   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.322823   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.323149   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.323170   40272 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:18:03.473999   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:18:03.474027   40272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:18:03.474048   40272 ubuntu.go:190] setting up certificates
	I1202 19:18:03.474072   40272 provision.go:84] configureAuth start
	I1202 19:18:03.474137   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:03.492443   40272 provision.go:143] copyHostCerts
	I1202 19:18:03.492497   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492535   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:18:03.492553   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492631   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:18:03.492733   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492755   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:18:03.492763   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492791   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:18:03.492852   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492873   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:18:03.492880   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492905   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:18:03.492966   40272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:18:03.672249   40272 provision.go:177] copyRemoteCerts
	I1202 19:18:03.672315   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:18:03.672360   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.690216   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:03.793601   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:18:03.793730   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:18:03.811690   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:18:03.811788   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:18:03.829853   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:18:03.829937   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:18:03.847063   40272 provision.go:87] duration metric: took 372.963339ms to configureAuth
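The copyHostCerts entries above follow a remove-then-copy pattern: any stale copy of ca.pem/cert.pem/key.pem in the profile directory is deleted before the fresh certificate bytes are written. A minimal Go sketch of that pattern, using placeholder paths and a 0600 mode rather than the exact ones minikube uses:

// copyCert replaces dst with the contents of src, mirroring the
// "found ..., removing ..." / "cp: ..." sequence in the log above.
// Paths and the 0600 mode are illustrative assumptions.
package main

import (
	"fmt"
	"os"
)

func copyCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		// Stale copy exists: remove it first, then rewrite.
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	data, err := os.ReadFile(src)
	if err != nil {
		return fmt.Errorf("read %s: %w", src, err)
	}
	return os.WriteFile(dst, data, 0o600)
}

func main() {
	// Hypothetical paths; the real ones live under the .minikube profile dir.
	if err := copyCert("certs/ca.pem", "ca.pem"); err != nil {
		fmt.Println(err)
	}
}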
	I1202 19:18:03.847135   40272 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:18:03.847323   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:03.847434   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.865504   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.865829   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.865845   40272 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:18:04.201120   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:18:04.201145   40272 machine.go:97] duration metric: took 1.244539118s to provisionDockerMachine
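The CRIO_MINIKUBE_OPTIONS drop-in above is written by piping a printf into sudo tee and then restarting crio over SSH. A rough Go sketch of assembling such a remote command string (the variable names are assumptions, not minikube's code):

package main

import "fmt"

func main() {
	// Build the shell command that writes /etc/sysconfig/crio.minikube and
	// restarts the runtime, in the same shape as the command in the log.
	opts := "--insecure-registry 10.96.0.0/12 "
	cmd := fmt.Sprintf(
		"sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
		opts)
	fmt.Println(cmd)
}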
	I1202 19:18:04.201156   40272 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:18:04.201184   40272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:18:04.201288   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:18:04.201334   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.219464   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.321684   40272 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:18:04.325089   40272 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 19:18:04.325149   40272 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 19:18:04.325168   40272 command_runner.go:130] > VERSION_ID="12"
	I1202 19:18:04.325186   40272 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 19:18:04.325207   40272 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 19:18:04.325237   40272 command_runner.go:130] > ID=debian
	I1202 19:18:04.325255   40272 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 19:18:04.325286   40272 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 19:18:04.325319   40272 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 19:18:04.325987   40272 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:18:04.326040   40272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:18:04.326062   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:18:04.326146   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:18:04.326256   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:18:04.326282   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:18:04.326394   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:18:04.326431   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> /etc/test/nested/copy/4470/hosts
	I1202 19:18:04.326515   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:18:04.334852   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:04.354617   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:18:04.371951   40272 start.go:296] duration metric: took 170.764596ms for postStartSetup
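The filesync scan above walks the local .minikube/files tree and maps every file to the absolute path it will occupy on the node (e.g. files/etc/ssl/certs/44702.pem becomes /etc/ssl/certs/44702.pem). A minimal sketch of that mapping, assuming a placeholder root directory rather than minikube's real asset types:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listAssets walks a local "files" tree and maps each file to the
// absolute remote path it would be copied to on the node.
func listAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		assets[p] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return assets, err
}

func main() {
	m, err := listAssets(".minikube/files") // hypothetical location
	if err != nil {
		fmt.Println(err)
		return
	}
	for local, remote := range m {
		fmt.Printf("%s -> %s\n", local, remote)
	}
}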
	I1202 19:18:04.372028   40272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:18:04.372100   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.388603   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.485826   40272 command_runner.go:130] > 12%
	I1202 19:18:04.486229   40272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:18:04.490474   40272 command_runner.go:130] > 172G
	I1202 19:18:04.490820   40272 fix.go:56] duration metric: took 1.556023913s for fixHost
	I1202 19:18:04.490841   40272 start.go:83] releasing machines lock for "functional-374330", held for 1.55605912s
	I1202 19:18:04.490913   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:04.507171   40272 ssh_runner.go:195] Run: cat /version.json
	I1202 19:18:04.507212   40272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:18:04.507223   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.507284   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.524406   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.524835   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.718816   40272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 19:18:04.718877   40272 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 19:18:04.719015   40272 ssh_runner.go:195] Run: systemctl --version
	I1202 19:18:04.724818   40272 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 19:18:04.724852   40272 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 19:18:04.725306   40272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:18:04.761633   40272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 19:18:04.765941   40272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 19:18:04.765984   40272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:18:04.766036   40272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:18:04.775671   40272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:18:04.775697   40272 start.go:496] detecting cgroup driver to use...
	I1202 19:18:04.775733   40272 detect.go:187] detected "cgroupfs" cgroup driver on host os
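One input into the cgroup-driver decision logged above is the host's cgroup layout. A small sketch of probing for the unified (v2) hierarchy; this is only an illustration of that probe, not minikube's full detection logic:

package main

import (
	"fmt"
	"os"
)

func main() {
	// On cgroup v2 hosts the unified hierarchy exposes this file;
	// on cgroup v1 it does not.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}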
	I1202 19:18:04.775798   40272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:18:04.790690   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:18:04.805178   40272 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:18:04.805246   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:18:04.821173   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:18:04.835737   40272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:18:04.950984   40272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:18:05.087151   40272 docker.go:234] disabling docker service ...
	I1202 19:18:05.087235   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:18:05.103857   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:18:05.118486   40272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:18:05.244193   40272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:18:05.357860   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:18:05.370494   40272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:18:05.383221   40272 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 19:18:05.384408   40272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:18:05.384504   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.393298   40272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:18:05.393384   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.402265   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.411107   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.420227   40272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:18:05.428585   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.437313   40272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.445677   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.454485   40272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:18:05.461070   40272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 19:18:05.462061   40272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:18:05.469806   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:05.580364   40272 ssh_runner.go:195] Run: sudo systemctl restart crio
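The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1 and the cgroup manager is forced to cgroupfs before crio is restarted. A small in-memory Go sketch of those two substitutions, assuming a stand-in config string rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf; the real file is
	// edited with sed over SSH, as in the log.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

	// Same substitutions the sed commands above perform.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}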
	I1202 19:18:05.753810   40272 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:18:05.753880   40272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:18:05.759122   40272 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 19:18:05.759148   40272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 19:18:05.759155   40272 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 19:18:05.759163   40272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:05.759168   40272 command_runner.go:130] > Access: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759176   40272 command_runner.go:130] > Modify: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759183   40272 command_runner.go:130] > Change: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759187   40272 command_runner.go:130] >  Birth: -
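The "Will wait 60s for socket path" step above amounts to polling for /var/run/crio/crio.sock until it exists or the deadline passes. A minimal sketch of that wait loop (the poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path until it appears or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}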
	I1202 19:18:05.759949   40272 start.go:564] Will wait 60s for crictl version
	I1202 19:18:05.760004   40272 ssh_runner.go:195] Run: which crictl
	I1202 19:18:05.764137   40272 command_runner.go:130] > /usr/local/bin/crictl
	I1202 19:18:05.765127   40272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:18:05.790594   40272 command_runner.go:130] > Version:  0.1.0
	I1202 19:18:05.790618   40272 command_runner.go:130] > RuntimeName:  cri-o
	I1202 19:18:05.790833   40272 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 19:18:05.791045   40272 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 19:18:05.793417   40272 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:18:05.793500   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.827591   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.827617   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.827624   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.827633   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.827640   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.827654   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.827661   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.827671   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.827679   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.827682   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.827686   40272 command_runner.go:130] >      static
	I1202 19:18:05.827702   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.827705   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.827713   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.827719   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.827727   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.827733   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.827740   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.827750   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.827762   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.829485   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.856217   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.856241   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.856248   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.856254   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.856260   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.856264   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.856268   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.856272   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.856277   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.856281   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.856285   40272 command_runner.go:130] >      static
	I1202 19:18:05.856288   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.856292   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.856297   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.856300   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.856307   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.856311   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.856315   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.856333   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.856342   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.862922   40272 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:18:05.865574   40272 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:18:05.881617   40272 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:18:05.885365   40272 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 19:18:05.885465   40272 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:18:05.885585   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:05.885631   40272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:18:05.915386   40272 command_runner.go:130] > {
	I1202 19:18:05.915407   40272 command_runner.go:130] >   "images":  [
	I1202 19:18:05.915412   40272 command_runner.go:130] >     {
	I1202 19:18:05.915425   40272 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 19:18:05.915430   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915436   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 19:18:05.915440   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915443   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915458   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 19:18:05.915465   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915469   40272 command_runner.go:130] >       "size":  "29035622",
	I1202 19:18:05.915474   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915478   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915484   40272 command_runner.go:130] >     },
	I1202 19:18:05.915487   40272 command_runner.go:130] >     {
	I1202 19:18:05.915494   40272 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 19:18:05.915501   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915507   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 19:18:05.915511   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915523   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915531   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 19:18:05.915535   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915542   40272 command_runner.go:130] >       "size":  "74488375",
	I1202 19:18:05.915547   40272 command_runner.go:130] >       "username":  "nonroot",
	I1202 19:18:05.915550   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915553   40272 command_runner.go:130] >     },
	I1202 19:18:05.915562   40272 command_runner.go:130] >     {
	I1202 19:18:05.915572   40272 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 19:18:05.915585   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915590   40272 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 19:18:05.915593   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915597   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915618   40272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 19:18:05.915626   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915630   40272 command_runner.go:130] >       "size":  "60854229",
	I1202 19:18:05.915634   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915637   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915641   40272 command_runner.go:130] >       },
	I1202 19:18:05.915645   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915652   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915661   40272 command_runner.go:130] >     },
	I1202 19:18:05.915666   40272 command_runner.go:130] >     {
	I1202 19:18:05.915681   40272 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 19:18:05.915686   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915691   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 19:18:05.915697   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915702   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915710   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 19:18:05.915713   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915718   40272 command_runner.go:130] >       "size":  "84947242",
	I1202 19:18:05.915721   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915725   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915728   40272 command_runner.go:130] >       },
	I1202 19:18:05.915736   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915743   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915746   40272 command_runner.go:130] >     },
	I1202 19:18:05.915750   40272 command_runner.go:130] >     {
	I1202 19:18:05.915756   40272 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 19:18:05.915762   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915771   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 19:18:05.915778   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915782   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915790   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 19:18:05.915797   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915805   40272 command_runner.go:130] >       "size":  "72167568",
	I1202 19:18:05.915809   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915813   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915816   40272 command_runner.go:130] >       },
	I1202 19:18:05.915820   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915824   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915828   40272 command_runner.go:130] >     },
	I1202 19:18:05.915831   40272 command_runner.go:130] >     {
	I1202 19:18:05.915841   40272 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 19:18:05.915852   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915858   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 19:18:05.915861   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915866   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915880   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 19:18:05.915883   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915887   40272 command_runner.go:130] >       "size":  "74105124",
	I1202 19:18:05.915891   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915896   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915902   40272 command_runner.go:130] >     },
	I1202 19:18:05.915906   40272 command_runner.go:130] >     {
	I1202 19:18:05.915912   40272 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 19:18:05.915917   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915925   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 19:18:05.915930   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915934   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915943   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 19:18:05.915949   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915953   40272 command_runner.go:130] >       "size":  "49819792",
	I1202 19:18:05.915961   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915968   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915972   40272 command_runner.go:130] >       },
	I1202 19:18:05.915976   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915982   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915988   40272 command_runner.go:130] >     },
	I1202 19:18:05.915992   40272 command_runner.go:130] >     {
	I1202 19:18:05.915999   40272 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 19:18:05.916003   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.916010   40272 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.916014   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916018   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.916027   40272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 19:18:05.916043   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916046   40272 command_runner.go:130] >       "size":  "517328",
	I1202 19:18:05.916049   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.916054   40272 command_runner.go:130] >         "value":  "65535"
	I1202 19:18:05.916064   40272 command_runner.go:130] >       },
	I1202 19:18:05.916068   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.916072   40272 command_runner.go:130] >       "pinned":  true
	I1202 19:18:05.916075   40272 command_runner.go:130] >     }
	I1202 19:18:05.916078   40272 command_runner.go:130] >   ]
	I1202 19:18:05.916081   40272 command_runner.go:130] > }
	I1202 19:18:05.916221   40272 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:18:05.916234   40272 cache_images.go:86] Images are preloaded, skipping loading
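The "all images are preloaded" conclusion above follows from parsing the `sudo crictl images --output json` document and checking that every required tag is already present. A minimal Go sketch of that check against a truncated payload in the same shape as the log output:

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal subset of the crictl images JSON shown above.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Truncated example payload; the real one is the JSON dump above.
	raw := []byte(`{"images":[{"id":"d7b100...","repoTags":["registry.k8s.io/pause:3.10.1"]}]}`)
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		fmt.Println(err)
		return
	}
	want := "registry.k8s.io/pause:3.10.1"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("found", want)
				return
			}
		}
	}
	fmt.Println("missing", want)
}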
	I1202 19:18:05.916241   40272 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:18:05.916331   40272 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
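The kubelet drop-in above is generated from the cluster config (version, node name, node IP). A sketch of rendering a unit like it with text/template; the field names and the trimmed flag set are illustrative assumptions, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Abbreviated stand-in for the [Unit]/[Service]/[Install] drop-in above.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.35.0-beta.0", "functional-374330", "192.168.49.2"})
}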
	I1202 19:18:05.916421   40272 ssh_runner.go:195] Run: crio config
	I1202 19:18:05.964092   40272 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 19:18:05.964119   40272 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 19:18:05.964127   40272 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 19:18:05.964130   40272 command_runner.go:130] > #
	I1202 19:18:05.964138   40272 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 19:18:05.964149   40272 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 19:18:05.964156   40272 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 19:18:05.964166   40272 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 19:18:05.964176   40272 command_runner.go:130] > # reload'.
	I1202 19:18:05.964182   40272 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 19:18:05.964189   40272 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 19:18:05.964197   40272 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 19:18:05.964204   40272 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 19:18:05.964210   40272 command_runner.go:130] > [crio]
	I1202 19:18:05.964216   40272 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 19:18:05.964223   40272 command_runner.go:130] > # containers images, in this directory.
	I1202 19:18:05.964661   40272 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 19:18:05.964681   40272 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 19:18:05.965195   40272 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 19:18:05.965213   40272 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 19:18:05.965585   40272 command_runner.go:130] > # imagestore = ""
	I1202 19:18:05.965601   40272 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 19:18:05.965614   40272 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 19:18:05.966162   40272 command_runner.go:130] > # storage_driver = "overlay"
	I1202 19:18:05.966179   40272 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 19:18:05.966186   40272 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 19:18:05.966362   40272 command_runner.go:130] > # storage_option = [
	I1202 19:18:05.966573   40272 command_runner.go:130] > # ]
	I1202 19:18:05.966591   40272 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 19:18:05.966598   40272 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 19:18:05.966880   40272 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 19:18:05.966894   40272 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 19:18:05.966902   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 19:18:05.966914   40272 command_runner.go:130] > # always happen on a node reboot
	I1202 19:18:05.967066   40272 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 19:18:05.967095   40272 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 19:18:05.967102   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 19:18:05.967107   40272 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 19:18:05.967213   40272 command_runner.go:130] > # version_file_persist = ""
	I1202 19:18:05.967225   40272 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 19:18:05.967234   40272 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 19:18:05.967423   40272 command_runner.go:130] > # internal_wipe = true
	I1202 19:18:05.967436   40272 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 19:18:05.967449   40272 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 19:18:05.967580   40272 command_runner.go:130] > # internal_repair = true
	I1202 19:18:05.967590   40272 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 19:18:05.967596   40272 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 19:18:05.967602   40272 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 19:18:05.967753   40272 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 19:18:05.967764   40272 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 19:18:05.967767   40272 command_runner.go:130] > [crio.api]
	I1202 19:18:05.967773   40272 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 19:18:05.967953   40272 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 19:18:05.967969   40272 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 19:18:05.968134   40272 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 19:18:05.968145   40272 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 19:18:05.968169   40272 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 19:18:05.968297   40272 command_runner.go:130] > # stream_port = "0"
	I1202 19:18:05.968307   40272 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 19:18:05.968473   40272 command_runner.go:130] > # stream_enable_tls = false
	I1202 19:18:05.968483   40272 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 19:18:05.968653   40272 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 19:18:05.968663   40272 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 19:18:05.968669   40272 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968775   40272 command_runner.go:130] > # stream_tls_cert = ""
	I1202 19:18:05.968785   40272 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 19:18:05.968792   40272 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968905   40272 command_runner.go:130] > # stream_tls_key = ""
	I1202 19:18:05.968915   40272 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 19:18:05.968922   40272 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 19:18:05.968926   40272 command_runner.go:130] > # automatically pick up the changes.
	I1202 19:18:05.969055   40272 command_runner.go:130] > # stream_tls_ca = ""
	I1202 19:18:05.969084   40272 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969257   40272 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 19:18:05.969270   40272 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969439   40272 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 19:18:05.969511   40272 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 19:18:05.969528   40272 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 19:18:05.969532   40272 command_runner.go:130] > [crio.runtime]
	I1202 19:18:05.969539   40272 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 19:18:05.969544   40272 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 19:18:05.969548   40272 command_runner.go:130] > # "nofile=1024:2048"
	I1202 19:18:05.969554   40272 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 19:18:05.969676   40272 command_runner.go:130] > # default_ulimits = [
	I1202 19:18:05.969684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.969691   40272 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 19:18:05.969900   40272 command_runner.go:130] > # no_pivot = false
	I1202 19:18:05.969912   40272 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 19:18:05.969920   40272 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 19:18:05.970109   40272 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 19:18:05.970119   40272 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 19:18:05.970124   40272 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 19:18:05.970131   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970227   40272 command_runner.go:130] > # conmon = ""
	I1202 19:18:05.970236   40272 command_runner.go:130] > # Cgroup setting for conmon
	I1202 19:18:05.970244   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 19:18:05.970379   40272 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 19:18:05.970389   40272 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 19:18:05.970395   40272 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 19:18:05.970403   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970521   40272 command_runner.go:130] > # conmon_env = [
	I1202 19:18:05.970671   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970681   40272 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 19:18:05.970687   40272 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 19:18:05.970693   40272 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 19:18:05.970697   40272 command_runner.go:130] > # default_env = [
	I1202 19:18:05.970827   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970837   40272 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 19:18:05.970846   40272 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1202 19:18:05.970995   40272 command_runner.go:130] > # selinux = false
	I1202 19:18:05.971005   40272 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 19:18:05.971014   40272 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 19:18:05.971019   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971123   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.971133   40272 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 19:18:05.971140   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971283   40272 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 19:18:05.971297   40272 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 19:18:05.971349   40272 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 19:18:05.971394   40272 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 19:18:05.971420   40272 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 19:18:05.971426   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971532   40272 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 19:18:05.971542   40272 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 19:18:05.971554   40272 command_runner.go:130] > # the cgroup blockio controller.
	I1202 19:18:05.971691   40272 command_runner.go:130] > # blockio_config_file = ""
	I1202 19:18:05.971702   40272 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 19:18:05.971706   40272 command_runner.go:130] > # blockio parameters.
	I1202 19:18:05.971888   40272 command_runner.go:130] > # blockio_reload = false
	I1202 19:18:05.971899   40272 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 19:18:05.971911   40272 command_runner.go:130] > # irqbalance daemon.
	I1202 19:18:05.972089   40272 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 19:18:05.972099   40272 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 19:18:05.972107   40272 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 19:18:05.972118   40272 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 19:18:05.972238   40272 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 19:18:05.972249   40272 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 19:18:05.972255   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.972373   40272 command_runner.go:130] > # rdt_config_file = ""
	I1202 19:18:05.972382   40272 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 19:18:05.972510   40272 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 19:18:05.972521   40272 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 19:18:05.972668   40272 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 19:18:05.972679   40272 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 19:18:05.972686   40272 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 19:18:05.972689   40272 command_runner.go:130] > # will be added.
	I1202 19:18:05.972804   40272 command_runner.go:130] > # default_capabilities = [
	I1202 19:18:05.972909   40272 command_runner.go:130] > # 	"CHOWN",
	I1202 19:18:05.973035   40272 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 19:18:05.973186   40272 command_runner.go:130] > # 	"FSETID",
	I1202 19:18:05.973194   40272 command_runner.go:130] > # 	"FOWNER",
	I1202 19:18:05.973322   40272 command_runner.go:130] > # 	"SETGID",
	I1202 19:18:05.973468   40272 command_runner.go:130] > # 	"SETUID",
	I1202 19:18:05.973500   40272 command_runner.go:130] > # 	"SETPCAP",
	I1202 19:18:05.973632   40272 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 19:18:05.973847   40272 command_runner.go:130] > # 	"KILL",
	I1202 19:18:05.973855   40272 command_runner.go:130] > # ]
	I1202 19:18:05.973864   40272 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 19:18:05.973870   40272 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 19:18:05.974039   40272 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 19:18:05.974052   40272 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 19:18:05.974059   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974062   40272 command_runner.go:130] > default_sysctls = [
	I1202 19:18:05.974148   40272 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 19:18:05.974179   40272 command_runner.go:130] > ]
	I1202 19:18:05.974185   40272 command_runner.go:130] > # List of devices on the host that a
	I1202 19:18:05.974297   40272 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 19:18:05.974459   40272 command_runner.go:130] > # allowed_devices = [
	I1202 19:18:05.974492   40272 command_runner.go:130] > # 	"/dev/fuse",
	I1202 19:18:05.974497   40272 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 19:18:05.974500   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974505   40272 command_runner.go:130] > # List of additional devices. specified as
	I1202 19:18:05.974517   40272 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 19:18:05.974706   40272 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 19:18:05.974717   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974722   40272 command_runner.go:130] > # additional_devices = [
	I1202 19:18:05.974730   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974735   40272 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 19:18:05.974870   40272 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 19:18:05.975061   40272 command_runner.go:130] > # 	"/etc/cdi",
	I1202 19:18:05.975069   40272 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 19:18:05.975204   40272 command_runner.go:130] > # ]
	I1202 19:18:05.975337   40272 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 19:18:05.975610   40272 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 19:18:05.975708   40272 command_runner.go:130] > # Defaults to false.
	I1202 19:18:05.975730   40272 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 19:18:05.975766   40272 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 19:18:05.975927   40272 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 19:18:05.976135   40272 command_runner.go:130] > # hooks_dir = [
	I1202 19:18:05.976173   40272 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 19:18:05.976199   40272 command_runner.go:130] > # ]
	I1202 19:18:05.976222   40272 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 19:18:05.976257   40272 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 19:18:05.976344   40272 command_runner.go:130] > # its default mounts from the following two files:
	I1202 19:18:05.976363   40272 command_runner.go:130] > #
	I1202 19:18:05.976438   40272 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 19:18:05.976465   40272 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 19:18:05.976485   40272 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 19:18:05.976561   40272 command_runner.go:130] > #
	I1202 19:18:05.976637   40272 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 19:18:05.976658   40272 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 19:18:05.976681   40272 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 19:18:05.976711   40272 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 19:18:05.976797   40272 command_runner.go:130] > #
	I1202 19:18:05.976852   40272 command_runner.go:130] > # default_mounts_file = ""
	I1202 19:18:05.976886   40272 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 19:18:05.976912   40272 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 19:18:05.976930   40272 command_runner.go:130] > # pids_limit = -1
	I1202 19:18:05.977014   40272 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1202 19:18:05.977040   40272 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 19:18:05.977112   40272 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 19:18:05.977136   40272 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 19:18:05.977153   40272 command_runner.go:130] > # log_size_max = -1
	I1202 19:18:05.977240   40272 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 19:18:05.977264   40272 command_runner.go:130] > # log_to_journald = false
	I1202 19:18:05.977344   40272 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 19:18:05.977370   40272 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 19:18:05.977390   40272 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 19:18:05.977478   40272 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 19:18:05.977500   40272 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 19:18:05.977570   40272 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 19:18:05.977596   40272 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 19:18:05.977614   40272 command_runner.go:130] > # read_only = false
	I1202 19:18:05.977722   40272 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 19:18:05.977797   40272 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 19:18:05.977817   40272 command_runner.go:130] > # live configuration reload.
	I1202 19:18:05.977836   40272 command_runner.go:130] > # log_level = "info"
	I1202 19:18:05.977872   40272 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 19:18:05.977956   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.978011   40272 command_runner.go:130] > # log_filter = ""
	I1202 19:18:05.978051   40272 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978073   40272 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 19:18:05.978093   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978128   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978214   40272 command_runner.go:130] > # uid_mappings = ""
	I1202 19:18:05.978236   40272 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978257   40272 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 19:18:05.978338   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978377   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978397   40272 command_runner.go:130] > # gid_mappings = ""
	I1202 19:18:05.978483   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 19:18:05.978556   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978583   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978606   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978700   40272 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 19:18:05.978728   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 19:18:05.978805   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978827   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978909   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978941   40272 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 19:18:05.979022   40272 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 19:18:05.979049   40272 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 19:18:05.979139   40272 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 19:18:05.979164   40272 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 19:18:05.979239   40272 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 19:18:05.979264   40272 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 19:18:05.979291   40272 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 19:18:05.979376   40272 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 19:18:05.979411   40272 command_runner.go:130] > # drop_infra_ctr = true
	I1202 19:18:05.979493   40272 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 19:18:05.979517   40272 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 19:18:05.979541   40272 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 19:18:05.979625   40272 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 19:18:05.979649   40272 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 19:18:05.979723   40272 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 19:18:05.979744   40272 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 19:18:05.979763   40272 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 19:18:05.979845   40272 command_runner.go:130] > # shared_cpuset = ""
	I1202 19:18:05.979867   40272 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 19:18:05.979937   40272 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 19:18:05.979961   40272 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 19:18:05.979983   40272 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 19:18:05.980069   40272 command_runner.go:130] > # pinns_path = ""
	I1202 19:18:05.980091   40272 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 19:18:05.980113   40272 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 19:18:05.980205   40272 command_runner.go:130] > # enable_criu_support = true
	I1202 19:18:05.980225   40272 command_runner.go:130] > # Enable/disable the generation of the container,
	I1202 19:18:05.980246   40272 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 19:18:05.980337   40272 command_runner.go:130] > # enable_pod_events = false
	I1202 19:18:05.980364   40272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 19:18:05.980435   40272 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 19:18:05.980456   40272 command_runner.go:130] > # default_runtime = "crun"
	I1202 19:18:05.980476   40272 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 19:18:05.980567   40272 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1202 19:18:05.980641   40272 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 19:18:05.980666   40272 command_runner.go:130] > # creation as a file is not desired either.
	I1202 19:18:05.980689   40272 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 19:18:05.980782   40272 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 19:18:05.980807   40272 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 19:18:05.980885   40272 command_runner.go:130] > # ]
	I1202 19:18:05.980907   40272 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 19:18:05.980989   40272 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 19:18:05.981060   40272 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 19:18:05.981080   40272 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 19:18:05.981155   40272 command_runner.go:130] > #
	I1202 19:18:05.981180   40272 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 19:18:05.981237   40272 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 19:18:05.981273   40272 command_runner.go:130] > # runtime_type = "oci"
	I1202 19:18:05.981291   40272 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 19:18:05.981311   40272 command_runner.go:130] > # inherit_default_runtime = false
	I1202 19:18:05.981423   40272 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 19:18:05.981442   40272 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 19:18:05.981461   40272 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 19:18:05.981479   40272 command_runner.go:130] > # monitor_env = []
	I1202 19:18:05.981507   40272 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 19:18:05.981530   40272 command_runner.go:130] > # allowed_annotations = []
	I1202 19:18:05.981553   40272 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 19:18:05.981571   40272 command_runner.go:130] > # no_sync_log = false
	I1202 19:18:05.981591   40272 command_runner.go:130] > # default_annotations = {}
	I1202 19:18:05.981620   40272 command_runner.go:130] > # stream_websockets = false
	I1202 19:18:05.981644   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.981733   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.981765   40272 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 19:18:05.981785   40272 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 19:18:05.981807   40272 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 19:18:05.981914   40272 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 19:18:05.981934   40272 command_runner.go:130] > #   in $PATH.
	I1202 19:18:05.981954   40272 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 19:18:05.981989   40272 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 19:18:05.982017   40272 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 19:18:05.982034   40272 command_runner.go:130] > #   state.
	I1202 19:18:05.982057   40272 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 19:18:05.982098   40272 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 19:18:05.982128   40272 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 19:18:05.982148   40272 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 19:18:05.982168   40272 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 19:18:05.982199   40272 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 19:18:05.982235   40272 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 19:18:05.982255   40272 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 19:18:05.982277   40272 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 19:18:05.982307   40272 command_runner.go:130] > #   The currently recognized values are:
	I1202 19:18:05.982329   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 19:18:05.983678   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 19:18:05.983703   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 19:18:05.983795   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 19:18:05.983829   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 19:18:05.983905   40272 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 19:18:05.983938   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 19:18:05.983958   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 19:18:05.983978   40272 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 19:18:05.984011   40272 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 19:18:05.984040   40272 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 19:18:05.984061   40272 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 19:18:05.984082   40272 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 19:18:05.984114   40272 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 19:18:05.984143   40272 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 19:18:05.984168   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 19:18:05.984191   40272 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 19:18:05.984220   40272 command_runner.go:130] > #   deprecated option "conmon".
	I1202 19:18:05.984244   40272 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 19:18:05.984265   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 19:18:05.984298   40272 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 19:18:05.984320   40272 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 19:18:05.984343   40272 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 19:18:05.984373   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 19:18:05.984413   40272 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1202 19:18:05.984432   40272 command_runner.go:130] > #   conmon-rs by using:
	I1202 19:18:05.984470   40272 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 19:18:05.984495   40272 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 19:18:05.984515   40272 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 19:18:05.984549   40272 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 19:18:05.984571   40272 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 19:18:05.984595   40272 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 19:18:05.984630   40272 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 19:18:05.984653   40272 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 19:18:05.984677   40272 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 19:18:05.984716   40272 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 19:18:05.984737   40272 command_runner.go:130] > #   when a machine crash happens.
	I1202 19:18:05.984765   40272 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 19:18:05.984801   40272 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 19:18:05.984825   40272 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 19:18:05.984846   40272 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 19:18:05.984877   40272 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 19:18:05.984902   40272 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1202 19:18:05.984921   40272 command_runner.go:130] > #
	I1202 19:18:05.984958   40272 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 19:18:05.984976   40272 command_runner.go:130] > #
	I1202 19:18:05.984996   40272 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 19:18:05.985026   40272 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 19:18:05.985052   40272 command_runner.go:130] > #
	I1202 19:18:05.985075   40272 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 19:18:05.985099   40272 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 19:18:05.985125   40272 command_runner.go:130] > #
	I1202 19:18:05.985149   40272 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 19:18:05.985169   40272 command_runner.go:130] > # feature.
	I1202 19:18:05.985199   40272 command_runner.go:130] > #
	I1202 19:18:05.985224   40272 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1202 19:18:05.985244   40272 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 19:18:05.985274   40272 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 19:18:05.985304   40272 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 19:18:05.985329   40272 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 19:18:05.985349   40272 command_runner.go:130] > #
	I1202 19:18:05.985381   40272 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 19:18:05.985404   40272 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 19:18:05.985422   40272 command_runner.go:130] > #
	I1202 19:18:05.985454   40272 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1202 19:18:05.985482   40272 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 19:18:05.985497   40272 command_runner.go:130] > #
	I1202 19:18:05.985518   40272 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 19:18:05.985550   40272 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 19:18:05.985582   40272 command_runner.go:130] > # limitation.
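	A minimal sketch of wiring up the notifier described above, assuming it is added to the default crun handler shown below; the drop-in file name, pod name and busybox image are illustrative choices, and the result should be verified afterwards (for example with "sudo crio config"), since drop-in merge behavior may affect other fields of the crun entry:

# allow the notifier annotation for the crun handler (hypothetical drop-in)
sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf <<'EOF'
[crio.runtime.runtimes.crun]
allowed_annotations = [
	"io.containers.trace-syscall",
	"io.kubernetes.cri-o.seccompNotifierAction",
]
EOF
sudo systemctl restart crio

# opt a pod in; restartPolicy must be Never, as noted above
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF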
	I1202 19:18:05.985602   40272 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 19:18:05.985622   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 19:18:05.985670   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985689   40272 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 19:18:05.985704   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985709   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985725   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985731   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985741   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985745   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985749   40272 command_runner.go:130] > allowed_annotations = [
	I1202 19:18:05.985754   40272 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 19:18:05.985759   40272 command_runner.go:130] > ]
	I1202 19:18:05.985765   40272 command_runner.go:130] > privileged_without_host_devices = false
	I1202 19:18:05.985769   40272 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 19:18:05.985782   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 19:18:05.985786   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985795   40272 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 19:18:05.985801   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985810   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985821   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985829   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985833   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985837   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985845   40272 command_runner.go:130] > privileged_without_host_devices = false
	I1202 19:18:05.985852   40272 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 19:18:05.985860   40272 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 19:18:05.985867   40272 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 19:18:05.985881   40272 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 19:18:05.985892   40272 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 19:18:05.985905   40272 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 19:18:05.985915   40272 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 19:18:05.985926   40272 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 19:18:05.985936   40272 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 19:18:05.985947   40272 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 19:18:05.985953   40272 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 19:18:05.985964   40272 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 19:18:05.985968   40272 command_runner.go:130] > # Example:
	I1202 19:18:05.985975   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 19:18:05.985980   40272 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 19:18:05.985987   40272 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 19:18:05.985993   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 19:18:05.985996   40272 command_runner.go:130] > # cpuset = "0-1"
	I1202 19:18:05.986000   40272 command_runner.go:130] > # cpushares = "5"
	I1202 19:18:05.986007   40272 command_runner.go:130] > # cpuquota = "1000"
	I1202 19:18:05.986011   40272 command_runner.go:130] > # cpuperiod = "100000"
	I1202 19:18:05.986014   40272 command_runner.go:130] > # cpulimit = "35"
	I1202 19:18:05.986018   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.986025   40272 command_runner.go:130] > # The workload name is workload-type.
	I1202 19:18:05.986033   40272 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 19:18:05.986041   40272 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 19:18:05.986047   40272 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 19:18:05.986057   40272 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 19:18:05.986069   40272 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
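	As a sketch of the opt-in flow just described, and assuming the example [crio.runtime.workloads.workload-type] table above were actually configured, a pod could activate the workload and override cpushares for one container roughly as follows (pod and container names and the busybox image are illustrative; the per-container annotation follows the example form quoted directly above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                                # activation annotation; value is ignored
    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override for container "app"
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF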
	I1202 19:18:05.986075   40272 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 19:18:05.986082   40272 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 19:18:05.986086   40272 command_runner.go:130] > # Default value is set to true
	I1202 19:18:05.986096   40272 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 19:18:05.986102   40272 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 19:18:05.986107   40272 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 19:18:05.986117   40272 command_runner.go:130] > # Default value is set to 'false'
	I1202 19:18:05.986121   40272 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 19:18:05.986127   40272 command_runner.go:130] > # timezone: To set the timezone for a container in CRI-O.
	I1202 19:18:05.986137   40272 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 19:18:05.986142   40272 command_runner.go:130] > # timezone = ""
	I1202 19:18:05.986151   40272 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 19:18:05.986154   40272 command_runner.go:130] > #
	I1202 19:18:05.986160   40272 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 19:18:05.986171   40272 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 19:18:05.986178   40272 command_runner.go:130] > [crio.image]
	I1202 19:18:05.986184   40272 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 19:18:05.986189   40272 command_runner.go:130] > # default_transport = "docker://"
	I1202 19:18:05.986197   40272 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 19:18:05.986205   40272 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986212   40272 command_runner.go:130] > # global_auth_file = ""
	I1202 19:18:05.986217   40272 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 19:18:05.986223   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986230   40272 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.986237   40272 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 19:18:05.986243   40272 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986248   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986255   40272 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 19:18:05.986260   40272 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 19:18:05.986266   40272 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1202 19:18:05.986275   40272 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1202 19:18:05.986281   40272 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 19:18:05.986291   40272 command_runner.go:130] > # pause_command = "/pause"
	I1202 19:18:05.986301   40272 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 19:18:05.986309   40272 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 19:18:05.986319   40272 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 19:18:05.986324   40272 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 19:18:05.986331   40272 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 19:18:05.986337   40272 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 19:18:05.986343   40272 command_runner.go:130] > # pinned_images = [
	I1202 19:18:05.986346   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986352   40272 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 19:18:05.986360   40272 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 19:18:05.986367   40272 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 19:18:05.986376   40272 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 19:18:05.986381   40272 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 19:18:05.986388   40272 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 19:18:05.986394   40272 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 19:18:05.986401   40272 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 19:18:05.986415   40272 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 19:18:05.986422   40272 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1202 19:18:05.986431   40272 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1202 19:18:05.986436   40272 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1202 19:18:05.986442   40272 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 19:18:05.986452   40272 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 19:18:05.986456   40272 command_runner.go:130] > # changing them here.
	I1202 19:18:05.986462   40272 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 19:18:05.986468   40272 command_runner.go:130] > # insecure_registries = [
	I1202 19:18:05.986472   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986478   40272 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 19:18:05.986486   40272 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 19:18:05.986490   40272 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 19:18:05.986495   40272 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 19:18:05.986499   40272 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 19:18:05.986505   40272 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 19:18:05.986518   40272 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 19:18:05.986525   40272 command_runner.go:130] > # auto_reload_registries = false
	I1202 19:18:05.986531   40272 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 19:18:05.986543   40272 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1202 19:18:05.986549   40272 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 19:18:05.986556   40272 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 19:18:05.986561   40272 command_runner.go:130] > # The mode of short name resolution.
	I1202 19:18:05.986568   40272 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 19:18:05.986578   40272 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1202 19:18:05.986583   40272 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 19:18:05.986588   40272 command_runner.go:130] > # short_name_mode = "enforcing"
	I1202 19:18:05.986593   40272 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1202 19:18:05.986602   40272 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 19:18:05.986606   40272 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 19:18:05.986612   40272 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 19:18:05.986619   40272 command_runner.go:130] > # CNI plugins.
	I1202 19:18:05.986623   40272 command_runner.go:130] > [crio.network]
	I1202 19:18:05.986629   40272 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 19:18:05.986637   40272 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1202 19:18:05.986640   40272 command_runner.go:130] > # cni_default_network = ""
	I1202 19:18:05.986646   40272 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 19:18:05.986655   40272 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 19:18:05.986661   40272 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 19:18:05.986664   40272 command_runner.go:130] > # plugin_dirs = [
	I1202 19:18:05.986668   40272 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 19:18:05.986674   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986678   40272 command_runner.go:130] > # List of included pod metrics.
	I1202 19:18:05.986681   40272 command_runner.go:130] > # included_pod_metrics = [
	I1202 19:18:05.986684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986690   40272 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1202 19:18:05.986696   40272 command_runner.go:130] > [crio.metrics]
	I1202 19:18:05.986701   40272 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 19:18:05.986705   40272 command_runner.go:130] > # enable_metrics = false
	I1202 19:18:05.986718   40272 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 19:18:05.986723   40272 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 19:18:05.986732   40272 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 19:18:05.986738   40272 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 19:18:05.986744   40272 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 19:18:05.986748   40272 command_runner.go:130] > # metrics_collectors = [
	I1202 19:18:05.986753   40272 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 19:18:05.986760   40272 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 19:18:05.986764   40272 command_runner.go:130] > # 	"containers_oom_total",
	I1202 19:18:05.986768   40272 command_runner.go:130] > # 	"processes_defunct",
	I1202 19:18:05.986777   40272 command_runner.go:130] > # 	"operations_total",
	I1202 19:18:05.986782   40272 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 19:18:05.986787   40272 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 19:18:05.986793   40272 command_runner.go:130] > # 	"operations_errors_total",
	I1202 19:18:05.986797   40272 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 19:18:05.986802   40272 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 19:18:05.986809   40272 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 19:18:05.986814   40272 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 19:18:05.986819   40272 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 19:18:05.986823   40272 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 19:18:05.986829   40272 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 19:18:05.986836   40272 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 19:18:05.986840   40272 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 19:18:05.986844   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986852   40272 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 19:18:05.986862   40272 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 19:18:05.986870   40272 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 19:18:05.986877   40272 command_runner.go:130] > # metrics_port = 9090
	I1202 19:18:05.986882   40272 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 19:18:05.986886   40272 command_runner.go:130] > # metrics_socket = ""
	I1202 19:18:05.986893   40272 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 19:18:05.986899   40272 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 19:18:05.986906   40272 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 19:18:05.986918   40272 command_runner.go:130] > # certificate on any modification event.
	I1202 19:18:05.986933   40272 command_runner.go:130] > # metrics_cert = ""
	I1202 19:18:05.986939   40272 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 19:18:05.986947   40272 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 19:18:05.986950   40272 command_runner.go:130] > # metrics_key = ""
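	Metrics are disabled in this dump (enable_metrics defaults to false); if they were switched on via a drop-in, the defaults above imply the endpoint could be scraped directly on the node. A minimal sketch, assuming curl is present in the node image:

# metrics_host/metrics_port taken from the defaults shown above
minikube -p functional-374330 ssh -- curl -s http://127.0.0.1:9090/metrics | grep -m 5 'crio_'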
	I1202 19:18:05.986956   40272 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 19:18:05.986962   40272 command_runner.go:130] > [crio.tracing]
	I1202 19:18:05.986967   40272 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 19:18:05.986972   40272 command_runner.go:130] > # enable_tracing = false
	I1202 19:18:05.986979   40272 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1202 19:18:05.986984   40272 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 19:18:05.986990   40272 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 19:18:05.986997   40272 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 19:18:05.987001   40272 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 19:18:05.987007   40272 command_runner.go:130] > [crio.nri]
	I1202 19:18:05.987011   40272 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 19:18:05.987015   40272 command_runner.go:130] > # enable_nri = true
	I1202 19:18:05.987019   40272 command_runner.go:130] > # NRI socket to listen on.
	I1202 19:18:05.987029   40272 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 19:18:05.987033   40272 command_runner.go:130] > # NRI plugin directory to use.
	I1202 19:18:05.987037   40272 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 19:18:05.987045   40272 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 19:18:05.987050   40272 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 19:18:05.987056   40272 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 19:18:05.987116   40272 command_runner.go:130] > # nri_disable_connections = false
	I1202 19:18:05.987126   40272 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 19:18:05.987130   40272 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 19:18:05.987136   40272 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 19:18:05.987142   40272 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 19:18:05.987147   40272 command_runner.go:130] > # NRI default validator configuration.
	I1202 19:18:05.987157   40272 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 19:18:05.987166   40272 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 19:18:05.987170   40272 command_runner.go:130] > # can be restricted/rejected:
	I1202 19:18:05.987178   40272 command_runner.go:130] > # - OCI hook injection
	I1202 19:18:05.987186   40272 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 19:18:05.987191   40272 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 19:18:05.987196   40272 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 19:18:05.987203   40272 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 19:18:05.987209   40272 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 19:18:05.987216   40272 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 19:18:05.987225   40272 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 19:18:05.987230   40272 command_runner.go:130] > #
	I1202 19:18:05.987234   40272 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 19:18:05.987239   40272 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 19:18:05.987245   40272 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 19:18:05.987254   40272 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 19:18:05.987260   40272 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 19:18:05.987268   40272 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 19:18:05.987279   40272 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 19:18:05.987283   40272 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 19:18:05.987286   40272 command_runner.go:130] > # ]
	I1202 19:18:05.987291   40272 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1202 19:18:05.987299   40272 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 19:18:05.987302   40272 command_runner.go:130] > [crio.stats]
	I1202 19:18:05.987308   40272 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 19:18:05.987316   40272 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 19:18:05.987320   40272 command_runner.go:130] > # stats_collection_period = 0
	I1202 19:18:05.987326   40272 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 19:18:05.987334   40272 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 19:18:05.987344   40272 command_runner.go:130] > # collection_period = 0
	I1202 19:18:05.987392   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941536561Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 19:18:05.987405   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941573139Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 19:18:05.987421   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941598771Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 19:18:05.987431   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941629007Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 19:18:05.987447   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.94184771Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.987460   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.942236436Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 19:18:05.987477   40272 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
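	The commented TOML above is CRI-O's effective configuration after merging the drop-in files listed in the messages just above; an equivalent dump can be reproduced on this node with, for example:

minikube -p functional-374330 ssh -- sudo crio config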
	I1202 19:18:05.987606   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:05.987620   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:05.987644   40272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:18:05.987670   40272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:18:05.987799   40272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
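	The generated config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; a quick way to sanity-check such a file by hand would be kubeadm's own validator (available in recent kubeadm releases), for example:

minikube -p functional-374330 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new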
	
	I1202 19:18:05.987877   40272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:18:05.995250   40272 command_runner.go:130] > kubeadm
	I1202 19:18:05.995271   40272 command_runner.go:130] > kubectl
	I1202 19:18:05.995276   40272 command_runner.go:130] > kubelet
	I1202 19:18:05.995308   40272 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:18:05.995379   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:18:06.002605   40272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:18:06.015240   40272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:18:06.033933   40272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 19:18:06.047469   40272 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:18:06.051453   40272 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 19:18:06.051580   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:06.161840   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:06.543709   40272 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:18:06.543774   40272 certs.go:195] generating shared ca certs ...
	I1202 19:18:06.543803   40272 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:06.543968   40272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:18:06.544037   40272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:18:06.544058   40272 certs.go:257] generating profile certs ...
	I1202 19:18:06.544203   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:18:06.544311   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:18:06.544381   40272 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:18:06.544424   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:18:06.544458   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:18:06.544493   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:18:06.544537   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:18:06.544570   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:18:06.544599   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:18:06.544648   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:18:06.544683   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:18:06.544773   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:18:06.544828   40272 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:18:06.544854   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:18:06.544932   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:18:06.551062   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:18:06.551141   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:18:06.551220   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:06.551261   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.551291   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.551312   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.552213   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:18:06.569384   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:18:06.587883   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:18:06.609527   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:18:06.628039   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:18:06.644623   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:18:06.662478   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:18:06.679440   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:18:06.696330   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:18:06.713584   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:18:06.731033   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:18:06.747714   40272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:18:06.761265   40272 ssh_runner.go:195] Run: openssl version
	I1202 19:18:06.766652   40272 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 19:18:06.767017   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:18:06.774639   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.777834   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778051   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778107   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.818127   40272 command_runner.go:130] > b5213941
	I1202 19:18:06.818625   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:18:06.826391   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:18:06.834719   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838324   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838367   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838418   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.878978   40272 command_runner.go:130] > 51391683
	I1202 19:18:06.879420   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:18:06.887230   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:18:06.895470   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899261   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899287   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899335   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.940199   40272 command_runner.go:130] > 3ec20f2e
	I1202 19:18:06.940694   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:18:06.948359   40272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951793   40272 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951816   40272 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 19:18:06.951822   40272 command_runner.go:130] > Device: 259,1	Inode: 1315539     Links: 1
	I1202 19:18:06.951851   40272 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:06.951865   40272 command_runner.go:130] > Access: 2025-12-02 19:13:58.595474405 +0000
	I1202 19:18:06.951871   40272 command_runner.go:130] > Modify: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951876   40272 command_runner.go:130] > Change: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951881   40272 command_runner.go:130] >  Birth: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951960   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:18:06.996850   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:06.997318   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:18:07.037433   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.037885   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:18:07.078161   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.078666   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:18:07.119364   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.119441   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:18:07.159628   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.160136   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:18:07.204176   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.204662   40272 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:07.204768   40272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:18:07.204851   40272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:18:07.233427   40272 cri.go:89] found id: ""
	I1202 19:18:07.233514   40272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:18:07.240330   40272 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 19:18:07.240352   40272 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 19:18:07.240359   40272 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 19:18:07.241346   40272 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:18:07.241363   40272 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:18:07.241437   40272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:18:07.248549   40272 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:18:07.248941   40272 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-374330" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249040   40272 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "functional-374330" cluster setting kubeconfig missing "functional-374330" context setting]
	I1202 19:18:07.249312   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.249749   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249896   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.250443   40272 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:18:07.250467   40272 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:18:07.250474   40272 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:18:07.250478   40272 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:18:07.250487   40272 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:18:07.250526   40272 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:18:07.250793   40272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:18:07.258519   40272 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:18:07.258557   40272 kubeadm.go:602] duration metric: took 17.188352ms to restartPrimaryControlPlane
	I1202 19:18:07.258569   40272 kubeadm.go:403] duration metric: took 53.913832ms to StartCluster
	I1202 19:18:07.258583   40272 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.258647   40272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.259281   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.259482   40272 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:18:07.259876   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:07.259927   40272 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:18:07.259993   40272 addons.go:70] Setting storage-provisioner=true in profile "functional-374330"
	I1202 19:18:07.260007   40272 addons.go:239] Setting addon storage-provisioner=true in "functional-374330"
	I1202 19:18:07.260034   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.260061   40272 addons.go:70] Setting default-storageclass=true in profile "functional-374330"
	I1202 19:18:07.260107   40272 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-374330"
	I1202 19:18:07.260433   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.260513   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.266365   40272 out.go:179] * Verifying Kubernetes components...
	I1202 19:18:07.269343   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:07.293348   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.293507   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.293796   40272 addons.go:239] Setting addon default-storageclass=true in "functional-374330"
	I1202 19:18:07.293827   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.294253   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.304761   40272 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:18:07.307700   40272 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.307724   40272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 19:18:07.307789   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.332842   40272 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:07.332860   40272 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 19:18:07.332914   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.347890   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.373144   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.469482   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:07.472955   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.515784   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.293178   40272 node_ready.go:35] waiting up to 6m0s for node "functional-374330" to be "Ready" ...
	I1202 19:18:08.293301   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.293355   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.293568   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293595   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293615   40272 retry.go:31] will retry after 144.187129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293684   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293702   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293710   40272 retry.go:31] will retry after 132.365923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:08.427169   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.438559   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.510555   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513791   40272 retry.go:31] will retry after 461.570102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513742   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513825   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513833   40272 retry.go:31] will retry after 354.67857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.794133   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.794203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:08.868974   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.929070   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.932369   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.932402   40272 retry.go:31] will retry after 765.19043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.975575   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.036469   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.042296   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.042376   40272 retry.go:31] will retry after 433.124039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.293618   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.293713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:09.476440   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.538441   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.541412   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.541444   40272 retry.go:31] will retry after 747.346338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.698768   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:09.764666   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.764703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.764723   40272 retry.go:31] will retry after 541.76994ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.793827   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.793965   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.794261   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:10.289986   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:10.293340   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.293732   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:10.293780   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:10.307063   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:10.373573   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.373608   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.373627   40272 retry.go:31] will retry after 1.037281057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388739   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.388813   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388864   40272 retry.go:31] will retry after 1.072570226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.794280   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.794348   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.794651   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.293375   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.293466   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.293739   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.411088   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:11.462503   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:11.470558   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.470603   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.470624   40272 retry.go:31] will retry after 2.459470693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530455   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.530510   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530529   40272 retry.go:31] will retry after 2.35440359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.794013   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.794477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:12.294194   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.294271   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:12.294648   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:12.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.793567   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.793595   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.793686   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.794006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.885433   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:13.930854   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:13.940303   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:13.943330   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:13.943359   40272 retry.go:31] will retry after 2.562469282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000907   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:14.000951   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000969   40272 retry.go:31] will retry after 3.172954134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.294316   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.294381   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:14.793366   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.793435   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.793778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:14.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:15.293495   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:15.793590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.793675   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.794004   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.293435   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.506093   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:16.576298   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:16.580372   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.580403   40272 retry.go:31] will retry after 6.193423377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.793925   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.794050   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:16.794410   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:17.174990   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:17.234065   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:17.234161   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.234184   40272 retry.go:31] will retry after 6.017051757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.293565   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.293640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:17.793940   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.794318   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.294120   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.294191   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.294497   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.794258   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.794341   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.794641   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:18.794693   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:19.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:19.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.793693   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.794032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.293712   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.793838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:21.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:21.293929   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:21.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.293417   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.774666   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:22.793983   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.794053   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.835259   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:22.835293   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:22.835313   40272 retry.go:31] will retry after 8.891499319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.251502   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:23.293920   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.293995   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.294305   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:23.294361   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:23.316803   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:23.325390   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.325420   40272 retry.go:31] will retry after 5.436174555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.794140   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.794209   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.794514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.294165   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.294234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.294532   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.794307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.794552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:25.294405   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.294476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.294786   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:25.294838   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:25.793518   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.793593   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.793954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.293881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.793441   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.793515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.793898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.293636   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.294038   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.793924   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.793994   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.794242   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:27.794290   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:28.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.294085   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.294398   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.762126   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:28.793717   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.794058   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.820417   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:28.820461   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:28.820480   40272 retry.go:31] will retry after 5.23527752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:29.294048   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.294387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:29.794183   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.794303   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.794634   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:29.794706   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:30.294267   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.294340   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.294624   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:30.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.793398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.793762   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.293841   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.727474   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:31.785329   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:31.788538   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.788571   40272 retry.go:31] will retry after 14.027342391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.793764   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.793834   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.794170   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:32.293926   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.293991   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.294245   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:32.294283   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:32.794305   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.794380   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.794731   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.293682   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.294006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:34.056328   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:34.114988   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:34.115034   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.115053   40272 retry.go:31] will retry after 20.825216377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.294372   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.294768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:34.294823   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:34.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.293815   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.293900   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.294151   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.793855   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.793935   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.794205   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.293483   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.793564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.793873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:36.793925   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:37.293668   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.293762   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.294075   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:37.793947   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.794293   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.294087   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.294335   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.794481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:38.794533   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:39.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.294563   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:39.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.794411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.794661   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.793560   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.793636   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:41.293642   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:41.294091   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:41.793737   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.793809   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.794119   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:42.294249   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.294351   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.295481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1202 19:18:42.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.794309   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.794549   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:43.294307   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.294779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:43.294833   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:43.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.793526   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.293539   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.293609   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.293775   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.294288   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.794074   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.794139   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:45.794427   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:45.816754   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:45.885215   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:45.888326   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:45.888364   40272 retry.go:31] will retry after 11.821193731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:46.293908   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.293987   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.294332   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:46.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.794188   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.794450   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.294325   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.294656   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.793465   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:48.293461   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.293549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:48.293980   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:48.793521   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.793585   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.793925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.293671   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.293755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.294085   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.793786   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.793857   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.794203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:50.293936   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.294005   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.294362   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:50.794095   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.794170   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.794494   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.294326   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.294720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:52.793945   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:53.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.293667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.293927   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:53.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.793852   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.794188   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.294005   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.294075   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.294426   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.794205   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.794284   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.794553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:54.794600   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:54.941002   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:55.004086   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:55.004129   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.004148   40272 retry.go:31] will retry after 20.918145005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.293488   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.293564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.293885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:55.793617   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.793707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.794018   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.293767   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.793648   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.793755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.794090   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:57.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.293891   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.294211   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:57.294263   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:57.710107   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:57.765891   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:57.765928   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.765947   40272 retry.go:31] will retry after 13.115816401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.793988   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.794063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.794301   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.294217   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.793430   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.793738   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.293442   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.293550   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.793871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:59.793930   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:00.295673   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.295757   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.296162   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:00.793971   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.794393   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.294295   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.294639   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.793817   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:02.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:02.293931   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:02.793522   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.793600   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.293690   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.293758   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.294007   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.793884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:04.293572   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:04.294031   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:04.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.793792   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:05.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:19:05.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:05.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:05.793473   40272 type.go:168] "Request Body" body=""
	I1202 19:19:05.793568   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:05.793916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:06.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:19:06.293673   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:06.293971   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:06.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:19:06.793528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:06.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:06.793897   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:07.293734   40272 type.go:168] "Request Body" body=""
	I1202 19:19:07.293806   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:07.294152   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:07.793956   40272 type.go:168] "Request Body" body=""
	I1202 19:19:07.794035   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:07.794289   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:08.294051   40272 type.go:168] "Request Body" body=""
	I1202 19:19:08.294130   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:08.294477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:08.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:19:08.794232   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:08.794588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:08.794644   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:09.294344   40272 type.go:168] "Request Body" body=""
	I1202 19:19:09.294413   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:09.294705   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:09.793394   40272 type.go:168] "Request Body" body=""
	I1202 19:19:09.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:09.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:19:10.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:10.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:19:10.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:10.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.882157   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:10.938212   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:10.938272   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:10.938296   40272 retry.go:31] will retry after 16.990081142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:11.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:19:11.293533   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:11.293866   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:11.293912   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:11.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:19:11.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:11.793893   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:12.293418   40272 type.go:168] "Request Body" body=""
	I1202 19:19:12.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:12.293805   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:12.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:12.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:12.793829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:13.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:19:13.293523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:13.293887   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:13.293939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:13.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:19:13.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:13.793901   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:14.293451   40272 type.go:168] "Request Body" body=""
	I1202 19:19:14.293545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:14.293847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:14.793538   40272 type.go:168] "Request Body" body=""
	I1202 19:19:14.793612   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:14.793947   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:15.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:15.293500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:15.293781   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:15.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:19:15.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:15.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:15.793881   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:15.923138   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:15.976380   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:15.979446   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:15.979475   40272 retry.go:31] will retry after 43.938975662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:16.293891   40272 type.go:168] "Request Body" body=""
	I1202 19:19:16.293966   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:16.294319   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:16.793918   40272 type.go:168] "Request Body" body=""
	I1202 19:19:16.794007   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:16.794273   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:17.293817   40272 type.go:168] "Request Body" body=""
	I1202 19:19:17.293889   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:17.294222   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:17.794224   40272 type.go:168] "Request Body" body=""
	I1202 19:19:17.794322   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:17.794659   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:17.794718   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:18.293644   40272 type.go:168] "Request Body" body=""
	I1202 19:19:18.293745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:18.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:18.793819   40272 type.go:168] "Request Body" body=""
	I1202 19:19:18.793896   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:18.794214   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:19.294047   40272 type.go:168] "Request Body" body=""
	I1202 19:19:19.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:19.294429   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:19.794155   40272 type.go:168] "Request Body" body=""
	I1202 19:19:19.794251   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:19.794516   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:20.294336   40272 type.go:168] "Request Body" body=""
	I1202 19:19:20.294409   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:20.294750   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:20.294804   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:20.793466   40272 type.go:168] "Request Body" body=""
	I1202 19:19:20.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:20.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:21.293392   40272 type.go:168] "Request Body" body=""
	I1202 19:19:21.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:21.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:21.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:19:21.793509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:21.793880   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:22.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:19:22.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:22.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:22.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:19:22.793814   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:22.794072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:22.794110   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:23.293462   40272 type.go:168] "Request Body" body=""
	I1202 19:19:23.293552   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:23.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:23.793520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:23.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:24.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:19:24.293676   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:24.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:24.793402   40272 type.go:168] "Request Body" body=""
	I1202 19:19:24.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:24.793777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:25.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:19:25.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:25.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:25.293933   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:25.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:19:25.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:25.793822   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:26.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:26.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:26.293870   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:26.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:19:26.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:26.794001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:27.293786   40272 type.go:168] "Request Body" body=""
	I1202 19:19:27.293876   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:27.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:27.294188   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:27.794144   40272 type.go:168] "Request Body" body=""
	I1202 19:19:27.794229   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:27.794572   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:27.928884   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:27.980862   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983877   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983967   40272 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:28.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:19:28.293635   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:28.293939   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:28.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:19:28.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:28.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:29.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:19:29.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:29.293888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:29.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:19:29.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:29.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:29.793943   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:30.293604   40272 type.go:168] "Request Body" body=""
	I1202 19:19:30.293690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:30.293949   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:30.793447   40272 type.go:168] "Request Body" body=""
	I1202 19:19:30.793541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:30.793879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:31.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:19:31.293681   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:31.294045   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:31.793596   40272 type.go:168] "Request Body" body=""
	I1202 19:19:31.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:31.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:31.793973   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:32.293633   40272 type.go:168] "Request Body" body=""
	I1202 19:19:32.293736   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:32.294100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:32.794048   40272 type.go:168] "Request Body" body=""
	I1202 19:19:32.794127   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:32.794454   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:33.294107   40272 type.go:168] "Request Body" body=""
	I1202 19:19:33.294193   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:33.294469   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:33.794161   40272 type.go:168] "Request Body" body=""
	I1202 19:19:33.794241   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:33.794576   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:33.794630   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:34.294318   40272 type.go:168] "Request Body" body=""
	I1202 19:19:34.294390   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:34.294756   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:34.793348   40272 type.go:168] "Request Body" body=""
	I1202 19:19:34.793419   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:34.793816   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:35.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:19:35.293582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:35.293934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:35.793450   40272 type.go:168] "Request Body" body=""
	I1202 19:19:35.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:35.793853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:36.293403   40272 type.go:168] "Request Body" body=""
	I1202 19:19:36.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:36.293796   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:36.293849   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:36.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:19:36.793604   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:36.793910   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:37.293819   40272 type.go:168] "Request Body" body=""
	I1202 19:19:37.293921   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:37.294237   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:37.793992   40272 type.go:168] "Request Body" body=""
	I1202 19:19:37.794062   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:37.794317   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:38.294129   40272 type.go:168] "Request Body" body=""
	I1202 19:19:38.294219   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:38.294552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:38.294607   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:38.794375   40272 type.go:168] "Request Body" body=""
	I1202 19:19:38.794449   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:38.794753   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:39.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:19:39.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:39.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:39.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:19:39.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:39.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:40.293464   40272 type.go:168] "Request Body" body=""
	I1202 19:19:40.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:40.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:40.793609   40272 type.go:168] "Request Body" body=""
	I1202 19:19:40.793726   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:40.793971   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:40.794046   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:41.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:19:41.293783   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:41.294101   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:41.793762   40272 type.go:168] "Request Body" body=""
	I1202 19:19:41.793835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:41.794208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:42.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:19:42.293532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:42.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:42.793895   40272 type.go:168] "Request Body" body=""
	I1202 19:19:42.793974   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:42.794274   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:42.794330   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:43.293462   40272 type.go:168] "Request Body" body=""
	I1202 19:19:43.293536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:43.293847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:43.793403   40272 type.go:168] "Request Body" body=""
	I1202 19:19:43.793470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:43.793794   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:44.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:44.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:44.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:44.793570   40272 type.go:168] "Request Body" body=""
	I1202 19:19:44.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:44.793981   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:45.293992   40272 type.go:168] "Request Body" body=""
	I1202 19:19:45.294153   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:45.294968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:45.295095   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:45.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:19:45.793517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:45.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:46.293433   40272 type.go:168] "Request Body" body=""
	I1202 19:19:46.293523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:46.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:46.793580   40272 type.go:168] "Request Body" body=""
	I1202 19:19:46.793672   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:46.794005   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:47.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:19:47.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:47.294181   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:47.794191   40272 type.go:168] "Request Body" body=""
	I1202 19:19:47.794264   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:47.794574   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:47.794634   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:48.294351   40272 type.go:168] "Request Body" body=""
	I1202 19:19:48.294414   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:48.294658   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:48.793387   40272 type.go:168] "Request Body" body=""
	I1202 19:19:48.793458   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:48.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:49.293548   40272 type.go:168] "Request Body" body=""
	I1202 19:19:49.293622   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:49.293967   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:49.793638   40272 type.go:168] "Request Body" body=""
	I1202 19:19:49.793723   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:49.793982   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:50.293669   40272 type.go:168] "Request Body" body=""
	I1202 19:19:50.293738   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:50.294063   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:50.294115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:50.793649   40272 type.go:168] "Request Body" body=""
	I1202 19:19:50.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:50.794030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:51.293404   40272 type.go:168] "Request Body" body=""
	I1202 19:19:51.293477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:51.293789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:51.793444   40272 type.go:168] "Request Body" body=""
	I1202 19:19:51.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:51.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:52.293605   40272 type.go:168] "Request Body" body=""
	I1202 19:19:52.293689   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:52.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:52.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:19:52.794056   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:52.794307   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:52.794355   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:53.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:19:53.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:53.294542   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:53.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:19:53.794460   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:53.794789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:54.293367   40272 type.go:168] "Request Body" body=""
	I1202 19:19:54.293448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:54.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:54.793399   40272 type.go:168] "Request Body" body=""
	I1202 19:19:54.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:54.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:55.293465   40272 type.go:168] "Request Body" body=""
	I1202 19:19:55.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:55.293912   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:55.293970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:55.793387   40272 type.go:168] "Request Body" body=""
	I1202 19:19:55.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:55.793748   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:56.293378   40272 type.go:168] "Request Body" body=""
	I1202 19:19:56.293444   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:56.293784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:56.793485   40272 type.go:168] "Request Body" body=""
	I1202 19:19:56.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:56.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:57.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:19:57.293823   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:57.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:57.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:57.794072   40272 type.go:168] "Request Body" body=""
	I1202 19:19:57.794142   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:57.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:58.294111   40272 type.go:168] "Request Body" body=""
	I1202 19:19:58.294203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:58.294515   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:58.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:19:58.794402   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:58.794662   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:59.293346   40272 type.go:168] "Request Body" body=""
	I1202 19:19:59.293443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:59.293801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:59.793412   40272 type.go:168] "Request Body" body=""
	I1202 19:19:59.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:59.793844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:59.793894   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:59.919155   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:59.978732   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978768   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978842   40272 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:59.981270   40272 out.go:179] * Enabled addons: 
	I1202 19:19:59.984008   40272 addons.go:530] duration metric: took 1m52.724080055s for enable addons: enabled=[]
	I1202 19:20:00.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.319155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=25
	I1202 19:20:00.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.793581   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.293643   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.294269   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.794085   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:01.794475   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:02.294283   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.294801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:02.793839   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.793918   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.794224   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.293780   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.293848   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.294097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.793818   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.793890   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.794190   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:04.294069   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.294138   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.294439   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:04.294488   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:04.794180   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.794261   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.794525   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.294270   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.294339   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.294637   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.793358   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.793447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.793770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.794145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:06.794195   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:07.293975   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.294054   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.294413   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:07.794308   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.794425   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.794772   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.293671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.294020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:09.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.293769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:09.293828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:09.794253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.794326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.794686   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:11.293475   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.293548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:11.293934   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:11.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.293544   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.293610   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.293915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.793833   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.793916   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.794241   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:13.293799   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.293872   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.294179   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:13.294238   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:13.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.794022   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.794276   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.294026   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.294105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.294453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.794135   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.794207   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:15.294253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.294326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:15.294638   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:15.793355   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.793426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.793551   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.793621   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.293774   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.293867   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.794117   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.794213   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.794539   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:17.794594   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:18.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.294374   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:18.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.794070   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:20.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.293900   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:20.293961   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:20.793436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.293924   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.793463   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.793956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.293478   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.793771   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:22.793827   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:23.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:23.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.293436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.293506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:24.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:25.293608   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.293707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.294025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:25.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.794022   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:26.794082   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:27.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.293785   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.294032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:27.793959   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.294157   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.294237   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.294582   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.794354   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.794429   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.794706   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:28.794758   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:29.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:29.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.293432   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.293782   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.793582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:31.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.293580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:31.293985   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:31.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.793797   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.793874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.794194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:33.293954   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.294018   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.294268   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:33.294307   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:33.794022   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.794093   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.794394   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.294075   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.294145   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.294479   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.794081   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.794161   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.794411   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:35.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.294307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.294631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:35.294684   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:35.794291   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.794361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.794710   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.294383   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.294672   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.793869   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.293817   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.294175   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.794113   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.794365   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:37.794404   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:38.294151   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.294567   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:38.794364   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.794441   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.794795   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.794051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:40.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.293749   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:40.294131   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:40.793755   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.794137   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.293804   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.293874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.294208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.794044   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.794437   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:42.294271   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.294354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.294638   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:42.294682   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:42.793464   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.293529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.293884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.793555   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.793904   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.293677   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.793724   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.793796   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:44.794158   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:45.293768   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.293839   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.294135   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:45.794039   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.294279   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.294679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.793388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.793455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:47.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.293786   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.294051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:47.294093   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:47.794031   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.794101   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.294153   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.294227   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.294472   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.794239   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.794680   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.293461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.293815   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.793404   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.793801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:49.793850   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:50.293494   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.293926   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:50.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.793579   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.293925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.794124   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:51.794181   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:52.293850   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.293930   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.294277   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:52.794083   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.794149   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.794406   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.294121   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.294195   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.294529   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.794350   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.794679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:53.794733   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:54.293471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.293541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:54.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:56.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.293455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:56.293831   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:56.793498   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.793574   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.793934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.293700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.293941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.793858   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.793928   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.794244   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:58.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.294083   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.294416   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:58.294470   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:58.794152   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.794222   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.794483   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.294312   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.294645   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.794292   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.794364   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.794674   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.293476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.293799   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.793832   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:00.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:01.293577   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:01.793727   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.793804   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.293823   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.293903   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.294253   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.794285   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.794354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.794650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:02.794701   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:03.293400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.293470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:03.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.293824   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.793783   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:05.293327   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.293398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:05.293767   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:05.794396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.794464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.794774   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.293683   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.793543   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:07.293810   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.293905   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.294228   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:07.294294   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:07.794228   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.794296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.794557   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.294314   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.294391   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.294721   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.793513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.293515   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.793507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.793849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:09.793915   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:10.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.293946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:10.793633   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.793713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.794014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.293862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.293767   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:12.293819   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:12.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.293560   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.293641   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:14.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.293853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:14.293920   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:14.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.293520   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.293586   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.793540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.793613   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:16.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.293615   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:16.293998   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:16.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.293689   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.293770   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.793898   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.793968   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.794294   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:18.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.294082   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.294374   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:18.294428   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:18.794173   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.794258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.794584   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.294375   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.294447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.294755   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.793492   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.793769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.793542   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.793614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.793957   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:20.794013   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:21.293675   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.293740   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:21.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.293837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.793766   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.793836   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.794155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:22.794204   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:23.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:23.793615   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.794078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.793860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:25.293571   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.293642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.293963   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:25.294010   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:25.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.793479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.793840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.793506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:27.293759   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.294093   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:27.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:27.794030   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.794105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.794432   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.294126   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.294546   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.794342   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.794587   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.293336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.793558   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:29.794070   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:30.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.293704   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:30.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.793500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:32.293467   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.293899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:32.293955   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:32.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.793527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.293566   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.293634   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.793481   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.793759   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:34.793805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:35.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.293507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:35.793599   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.793691   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.293780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.793879   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.793947   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.794270   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:36.794327   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:37.294002   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.294382   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:37.794293   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.794366   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.794623   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.293793   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.793479   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.793551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.793911   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:39.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:39.293900   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:39.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.793400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.793469   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.293410   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.293820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.793779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:41.793832   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:42.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:42.793809   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.793881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.794230   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.794300   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.794607   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:43.794654   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:44.294246   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.294318   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:44.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.793399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.793724   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.793836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:46.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.293848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:46.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:46.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.793766   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.293717   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.294035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.793981   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.794397   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:48.293997   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.294340   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:48.294384   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:48.794112   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.794192   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.794535   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.294292   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.794401   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.794648   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.293343   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.293431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.293749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.793332   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.793431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.793733   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:50.793781   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:51.294382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.294749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:51.794404   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.794484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.794827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.793741   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.794061   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:52.794098   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:53.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.293502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.293842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:53.793547   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.793619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.293686   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.293772   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:55.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.293522   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:55.293916   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:55.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.793966   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.793700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.794037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:57.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.293812   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.294147   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:57.294199   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:57.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.794029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.794360   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.294144   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.294215   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.294530   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.794311   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.794384   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.794669   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.293382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.293457   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.793915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:59.793970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:00.294203   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.294291   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:00.794373   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.794448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.794765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.793408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:02.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.293521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.293831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:02.293882   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:02.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.793524   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.294092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.793779   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.793863   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:04.294013   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.294096   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.294427   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:04.294479   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:04.794192   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.794518   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.294290   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.294361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.294692   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.293537   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.293889   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.793886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:06.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:07.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.293561   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:07.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.794431   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.294315   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.793325   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.793395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:09.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:09.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:09.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.793938   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.293512   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.293605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.293914   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.793473   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:11.293419   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:11.293911   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:11.793571   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.793667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.793998   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.293707   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.294044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.794038   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.794457   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:13.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.294294   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.294608   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:13.294662   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:13.793319   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.793385   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.793631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.293401   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.793974   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.293634   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.293715   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.294019   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.793580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.793905   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:15.793957   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:16.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.293753   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.294105   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:16.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.794139   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.294035   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.294104   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.294447   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.794420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.794500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.794802   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:17.794864   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:18.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:18.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.793908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.793487   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:20.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:20.294043   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:20.793747   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.793818   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.293829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.294078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.793486   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.293599   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.293684   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.293961   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.793847   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.793919   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.794173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:22.794221   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:23.294004   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.294391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:23.794182   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.794569   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.294310   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.294382   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.294678   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:25.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.293849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:25.293899   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:25.793411   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.793784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.293511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:27.293716   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.293790   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:27.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:27.794020   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.794114   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.294228   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.294302   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.294604   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.794372   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.794442   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.793369   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.793452   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.793775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:29.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:30.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:30.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.793820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.293618   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.293975   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.793639   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.793724   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.794026   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:31.794076   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:32.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.293867   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:32.793458   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.793534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.293479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.293808   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.793577   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:34.293638   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.293733   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.294053   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:34.294138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:34.793757   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.794123   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.293805   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.293875   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.294212   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.793796   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.793870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.794183   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:36.293916   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.293981   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.294225   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:36.294266   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:36.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.794051   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.794349   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.294147   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.294225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.294553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.794437   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.794726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.293504   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.793561   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.793979   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:38.794037   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:39.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.293812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:39.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.793508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.293825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.793461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.793725   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:41.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:41.293919   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:41.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.306206   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.306286   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.306588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.793842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:43.293564   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:43.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:43.793719   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.794033   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.293420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.293840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.794225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.794573   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.293335   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.293432   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.293823   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.793584   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.793699   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.794020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:45.794077   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:46.293765   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.294194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:46.793979   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.294352   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.294421   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.294757   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.793514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:48.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.293488   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:48.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:48.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.793896   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.793746   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.794140   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:50.293958   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.294029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.294356   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:50.794160   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.794234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.794577   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.294330   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.294654   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.793400   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.293818   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.793765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:52.793817   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:53.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:53.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.793594   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.793990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.293543   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.293619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.293933   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.793885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:54.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:55.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.293897   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:55.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.793627   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.293469   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.293845   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.793575   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.793643   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.793943   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:56.793996   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:57.293776   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.293861   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:57.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.794158   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.294275   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.294346   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.294665   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.793386   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.793763   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:59.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.293903   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:59.293962   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:59.793451   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.793525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.296332   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.296406   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.296694   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.293498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.793424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:01.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:02.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.293637   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.294144   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:02.793976   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.794047   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.294017   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.294088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.294379   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.794118   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.794444   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:03.794495   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:04.294106   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.294176   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.294496   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:04.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.794365   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.794711   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.793605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.793941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:06.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.293719   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.294067   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:06.294117   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:06.793866   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.793938   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.293887   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.293967   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.294287   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.794150   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.794403   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:08.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.294258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.294594   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:08.294647   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:08.793335   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.793404   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.793760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.793478   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.293956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.793532   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.793599   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:10.793903   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:11.293547   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.293625   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:11.793691   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.793764   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.794076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.793673   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.794066   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:12.794115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:13.293795   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.293870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.294207   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:13.793969   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.794283   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.294039   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.294109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.294436   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.794094   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.794171   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.794488   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:14.794541   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:15.294282   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.294357   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.294611   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:15.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.794443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.794770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.293836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.793477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:17.293700   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:17.294109   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:17.793903   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.793973   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.794593   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.294328   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.294646   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.793322   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.793392   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.793726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.793807   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:19.793870   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:20.293525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.293596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:20.793525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.793601   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.793946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.293705   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.294002   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.793707   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.793780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.794097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:21.794151   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:22.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.293892   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.294246   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:22.794023   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.794088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.794347   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.294098   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.294169   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.294495   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.794344   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.794436   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.794764   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:23.794818   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:24.293402   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.293471   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:24.793418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.793495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.293624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.293973   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.793669   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.793735   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.793985   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:26.293681   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.293789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.294111   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:26.294163   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:26.793710   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.793789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.794114   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-374330 poll repeats every ~500ms from 19:23:27 through 19:24:06, every attempt returning "dial tcp 192.168.49.2:8441: connect: connection refused" and node_ready.go:55 re-logging the "will retry" warning roughly every 2-2.5 seconds ...]
	I1202 19:24:07.294030   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.294116   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.298234   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1202 19:24:07.301805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:07.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.794025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:08.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:24:08.293509   40272 node_ready.go:38] duration metric: took 6m0.000285031s for node "functional-374330" to be "Ready" ...
	I1202 19:24:08.296878   40272 out.go:203] 
	W1202 19:24:08.299748   40272 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:24:08.299768   40272 out.go:285] * 
	W1202 19:24:08.301915   40272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:24:08.304698   40272 out.go:203] 
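For reference, the node_ready.go retry trace above follows a plain poll-until-deadline pattern: re-issue the same GET every ~500ms until the node reports Ready or the 6-minute context deadline expires. A minimal, hypothetical Go sketch of that pattern (illustrative only, not minikube's actual node_ready.go implementation; the check function and timings below are stand-ins) looks like this:

// poll_ready.go - hypothetical sketch of the poll-until-deadline pattern
// visible in the node_ready.go log above; not minikube's real code.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForCondition re-runs check every interval until it succeeds or the
// context deadline expires (6m0s in the log above).
func waitForCondition(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		} else {
			fmt.Println("will retry:", err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node condition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// A short deadline keeps the demo quick; the failed test above used 6m.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	err := waitForCondition(ctx, 500*time.Millisecond, func() error {
		// Stand-in for GET /api/v1/nodes/functional-374330 and the Ready check.
		return errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
	})
	fmt.Println(err) // analogous to "context deadline exceeded" in the log
}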
	
	
	==> CRI-O <==
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697270209Z" level=info msg="Using the internal default seccomp profile"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697279554Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697285322Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697291254Z" level=info msg="RDT not available in the host system"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697303439Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.698232732Z" level=info msg="Conmon does support the --sync option"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.69825349Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.698268071Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.699067635Z" level=info msg="Conmon does support the --sync option"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.699096049Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.699220091Z" level=info msg="Updated default CNI network name to "
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.699755735Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\
"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_liste
n = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.700119043Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.700176732Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746773976Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746817643Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746864067Z" level=info msg="Create NRI interface"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746984392Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746994263Z" level=info msg="runtime interface created"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747006185Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747012371Z" level=info msg="runtime interface starting up..."
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747021052Z" level=info msg="starting plugins..."
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747034639Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747104447Z" level=info msg="No systemd watchdog enabled"
	Dec 02 19:18:05 functional-374330 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:24:10.311035    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:10.311899    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:10.313645    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:10.314352    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:10.316097    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:24:10 up  1:06,  0 user,  load average: 0.12, 0.21, 0.33
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:24:08 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:08 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 02 19:24:08 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:08 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:08 functional-374330 kubelet[9158]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:08 functional-374330 kubelet[9158]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:08 functional-374330 kubelet[9158]: E1202 19:24:08.881843    9158 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:08 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:08 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:09 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 02 19:24:09 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:09 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:09 functional-374330 kubelet[9179]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:09 functional-374330 kubelet[9179]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:09 functional-374330 kubelet[9179]: E1202 19:24:09.618981    9179 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:09 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:09 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:10 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 02 19:24:10 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:10 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:10 functional-374330 kubelet[9267]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:10 functional-374330 kubelet[9267]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:10 functional-374330 kubelet[9267]: E1202 19:24:10.350162    9267 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:10 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:10 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (350.345395ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.03s)
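The kubelet journal above pinpoints why SoftStart never sees a healthy apiserver: kubelet.service is crash-looping (restart counter 810-812) because configuration validation refuses to start on a cgroup v1 host, so nothing ever comes up behind localhost:8441. A quick way to confirm which cgroup hierarchy the node is actually on (a diagnostic sketch, not part of the test run; the container name comes from this profile) is to check the filesystem type mounted at /sys/fs/cgroup:

	# "cgroup2fs" indicates the unified cgroup v2 hierarchy; "tmpfs" indicates legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# same check inside the minikube node container used by this run
	docker exec functional-374330 stat -fc %T /sys/fs/cgroup/

On this host (Ubuntu 20.04 on kernel 5.15.0-1084-aws, per the logs), cgroup v1 is still the default, which matches the validation error.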

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-374330 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-374330 get po -A: exit status 1 (58.105761ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-374330 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-374330 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-374330 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
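The inspect output above also explains the address the failing kubectl call is targeting: inside the Docker network "functional-374330" the node is 192.168.49.2 with 8441/tcp exposed, and on the host that port is published to 127.0.0.1:32786. A minimal sketch for pulling just that mapping (the second form uses the same Go template minikube itself applies to port 22 later in these logs):

	docker port functional-374330 8441/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-374330

Both point at host port 32786 for this run, so the connection refusal comes from nothing listening on 8441 inside the container (the crash-looping kubelet above), not from a missing port publication.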
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (321.257213ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-535807 image save kicbase/echo-server:functional-535807 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image rm kicbase/echo-server:functional-535807 --alsologtostderr                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image save --daemon kicbase/echo-server:functional-535807 --alsologtostderr                                                             │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/test/nested/copy/4470/hosts                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/4470.pem                                                                                                    │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /usr/share/ca-certificates/4470.pem                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/44702.pem                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /usr/share/ca-certificates/44702.pem                                                                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format short --alsologtostderr                                                                                               │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ ssh            │ functional-535807 ssh pgrep buildkitd                                                                                                                     │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ image          │ functional-535807 image ls --format yaml --alsologtostderr                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format json --alsologtostderr                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr                                                    │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format table --alsologtostderr                                                                                               │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                                   │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                                │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ delete         │ -p functional-535807                                                                                                                                      │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ start          │ -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ start          │ -p functional-374330 --alsologtostderr -v=8                                                                                                               │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:18 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:18:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:18:02.458749   40272 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:18:02.458868   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.458880   40272 out.go:374] Setting ErrFile to fd 2...
	I1202 19:18:02.458886   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.459160   40272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:18:02.459549   40272 out.go:368] Setting JSON to false
	I1202 19:18:02.460340   40272 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3621,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:18:02.460405   40272 start.go:143] virtualization:  
	I1202 19:18:02.464020   40272 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:18:02.467892   40272 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:18:02.467969   40272 notify.go:221] Checking for updates...
	I1202 19:18:02.474021   40272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:18:02.477064   40272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:02.480130   40272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:18:02.483164   40272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:18:02.486142   40272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:18:02.489587   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:02.489732   40272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:18:02.527318   40272 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:18:02.527492   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.584790   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.575369586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.584902   40272 docker.go:319] overlay module found
	I1202 19:18:02.588038   40272 out.go:179] * Using the docker driver based on existing profile
	I1202 19:18:02.590861   40272 start.go:309] selected driver: docker
	I1202 19:18:02.590885   40272 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.591008   40272 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:18:02.591102   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.644457   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.635623623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.644867   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:02.644933   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:02.644976   40272 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.648222   40272 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:18:02.651050   40272 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:18:02.654072   40272 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:18:02.657154   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:02.657223   40272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:18:02.676274   40272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:18:02.676298   40272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:18:02.730421   40272 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:18:02.934277   40272 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:18:02.934463   40272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:18:02.934535   40272 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934623   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:18:02.934634   40272 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.203µs
	I1202 19:18:02.934648   40272 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:18:02.934660   40272 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934690   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:18:02.934695   40272 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.324µs
	I1202 19:18:02.934701   40272 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934707   40272 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:18:02.934711   40272 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934738   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:18:02.934736   40272 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934743   40272 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.525µs
	I1202 19:18:02.934750   40272 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934759   40272 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934774   40272 start.go:364] duration metric: took 25.468µs to acquireMachinesLock for "functional-374330"
	I1202 19:18:02.934787   40272 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:18:02.934789   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:18:02.934792   40272 fix.go:54] fixHost starting: 
	I1202 19:18:02.934794   40272 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 35.864µs
	I1202 19:18:02.934800   40272 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934809   40272 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934834   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:18:02.934845   40272 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 19:18:02.934851   40272 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934859   40272 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934885   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:18:02.934890   40272 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.983µs
	I1202 19:18:02.934895   40272 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:18:02.934913   40272 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934941   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:18:02.934946   40272 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.707µs
	I1202 19:18:02.934951   40272 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:18:02.934960   40272 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934985   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:18:02.934990   40272 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.646µs
	I1202 19:18:02.934995   40272 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:18:02.935015   40272 cache.go:87] Successfully saved all images to host disk.
	I1202 19:18:02.935074   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:02.953213   40272 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:18:02.953249   40272 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:18:02.956557   40272 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:18:02.956597   40272 machine.go:94] provisionDockerMachine start ...
	I1202 19:18:02.956677   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:02.973977   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:02.974301   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:02.974316   40272 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:18:03.125393   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.125419   40272 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:18:03.125485   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.143103   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.143432   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.143449   40272 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:18:03.303153   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.303231   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.322823   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.323149   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.323170   40272 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:18:03.473999   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:18:03.474027   40272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:18:03.474048   40272 ubuntu.go:190] setting up certificates
	I1202 19:18:03.474072   40272 provision.go:84] configureAuth start
	I1202 19:18:03.474137   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:03.492443   40272 provision.go:143] copyHostCerts
	I1202 19:18:03.492497   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492535   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:18:03.492553   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492631   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:18:03.492733   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492755   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:18:03.492763   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492791   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:18:03.492852   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492873   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:18:03.492880   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492905   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:18:03.492966   40272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:18:03.672249   40272 provision.go:177] copyRemoteCerts
	I1202 19:18:03.672315   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:18:03.672360   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.690216   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:03.793601   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:18:03.793730   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:18:03.811690   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:18:03.811788   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:18:03.829853   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:18:03.829937   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:18:03.847063   40272 provision.go:87] duration metric: took 372.963339ms to configureAuth
	I1202 19:18:03.847135   40272 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:18:03.847323   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:03.847434   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.865504   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.865829   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.865845   40272 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:18:04.201120   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:18:04.201145   40272 machine.go:97] duration metric: took 1.244539118s to provisionDockerMachine
	I1202 19:18:04.201156   40272 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:18:04.201184   40272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:18:04.201288   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:18:04.201334   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.219464   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.321684   40272 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:18:04.325089   40272 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 19:18:04.325149   40272 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 19:18:04.325168   40272 command_runner.go:130] > VERSION_ID="12"
	I1202 19:18:04.325186   40272 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 19:18:04.325207   40272 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 19:18:04.325237   40272 command_runner.go:130] > ID=debian
	I1202 19:18:04.325255   40272 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 19:18:04.325286   40272 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 19:18:04.325319   40272 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 19:18:04.325987   40272 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:18:04.326040   40272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:18:04.326062   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:18:04.326146   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:18:04.326256   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:18:04.326282   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:18:04.326394   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:18:04.326431   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> /etc/test/nested/copy/4470/hosts
	I1202 19:18:04.326515   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:18:04.334852   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:04.354617   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:18:04.371951   40272 start.go:296] duration metric: took 170.764596ms for postStartSetup
	I1202 19:18:04.372028   40272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:18:04.372100   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.388603   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.485826   40272 command_runner.go:130] > 12%
	I1202 19:18:04.486229   40272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:18:04.490474   40272 command_runner.go:130] > 172G
	I1202 19:18:04.490820   40272 fix.go:56] duration metric: took 1.556023913s for fixHost
	I1202 19:18:04.490841   40272 start.go:83] releasing machines lock for "functional-374330", held for 1.55605912s
	I1202 19:18:04.490913   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:04.507171   40272 ssh_runner.go:195] Run: cat /version.json
	I1202 19:18:04.507212   40272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:18:04.507223   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.507284   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.524406   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.524835   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.718816   40272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 19:18:04.718877   40272 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 19:18:04.719015   40272 ssh_runner.go:195] Run: systemctl --version
	I1202 19:18:04.724818   40272 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 19:18:04.724852   40272 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 19:18:04.725306   40272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:18:04.761633   40272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 19:18:04.765941   40272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 19:18:04.765984   40272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:18:04.766036   40272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:18:04.775671   40272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:18:04.775697   40272 start.go:496] detecting cgroup driver to use...
	I1202 19:18:04.775733   40272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:18:04.775798   40272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:18:04.790690   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:18:04.805178   40272 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:18:04.805246   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:18:04.821173   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:18:04.835737   40272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:18:04.950984   40272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:18:05.087151   40272 docker.go:234] disabling docker service ...
	I1202 19:18:05.087235   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:18:05.103857   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:18:05.118486   40272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:18:05.244193   40272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:18:05.357860   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:18:05.370494   40272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:18:05.383221   40272 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 19:18:05.384408   40272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:18:05.384504   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.393298   40272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:18:05.393384   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.402265   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.411107   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.420227   40272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:18:05.428585   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.437313   40272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.445677   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.454485   40272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:18:05.461070   40272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 19:18:05.462061   40272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:18:05.469806   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:05.580364   40272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:18:05.753810   40272 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:18:05.753880   40272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:18:05.759122   40272 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 19:18:05.759148   40272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 19:18:05.759155   40272 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 19:18:05.759163   40272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:05.759168   40272 command_runner.go:130] > Access: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759176   40272 command_runner.go:130] > Modify: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759183   40272 command_runner.go:130] > Change: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759187   40272 command_runner.go:130] >  Birth: -
	I1202 19:18:05.759949   40272 start.go:564] Will wait 60s for crictl version
	I1202 19:18:05.760004   40272 ssh_runner.go:195] Run: which crictl
	I1202 19:18:05.764137   40272 command_runner.go:130] > /usr/local/bin/crictl
	I1202 19:18:05.765127   40272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:18:05.790594   40272 command_runner.go:130] > Version:  0.1.0
	I1202 19:18:05.790618   40272 command_runner.go:130] > RuntimeName:  cri-o
	I1202 19:18:05.790833   40272 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 19:18:05.791045   40272 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 19:18:05.793417   40272 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:18:05.793500   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.827591   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.827617   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.827624   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.827633   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.827640   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.827654   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.827661   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.827671   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.827679   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.827682   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.827686   40272 command_runner.go:130] >      static
	I1202 19:18:05.827702   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.827705   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.827713   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.827719   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.827727   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.827733   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.827740   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.827750   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.827762   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.829485   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.856217   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.856241   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.856248   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.856254   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.856260   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.856264   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.856268   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.856272   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.856277   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.856281   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.856285   40272 command_runner.go:130] >      static
	I1202 19:18:05.856288   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.856292   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.856297   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.856300   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.856307   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.856311   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.856315   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.856333   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.856342   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.862922   40272 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:18:05.865574   40272 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:18:05.881617   40272 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:18:05.885365   40272 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 19:18:05.885465   40272 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:18:05.885585   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:05.885631   40272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:18:05.915386   40272 command_runner.go:130] > {
	I1202 19:18:05.915407   40272 command_runner.go:130] >   "images":  [
	I1202 19:18:05.915412   40272 command_runner.go:130] >     {
	I1202 19:18:05.915425   40272 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 19:18:05.915430   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915436   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 19:18:05.915440   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915443   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915458   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 19:18:05.915465   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915469   40272 command_runner.go:130] >       "size":  "29035622",
	I1202 19:18:05.915474   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915478   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915484   40272 command_runner.go:130] >     },
	I1202 19:18:05.915487   40272 command_runner.go:130] >     {
	I1202 19:18:05.915494   40272 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 19:18:05.915501   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915507   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 19:18:05.915511   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915523   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915531   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 19:18:05.915535   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915542   40272 command_runner.go:130] >       "size":  "74488375",
	I1202 19:18:05.915547   40272 command_runner.go:130] >       "username":  "nonroot",
	I1202 19:18:05.915550   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915553   40272 command_runner.go:130] >     },
	I1202 19:18:05.915562   40272 command_runner.go:130] >     {
	I1202 19:18:05.915572   40272 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 19:18:05.915585   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915590   40272 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 19:18:05.915593   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915597   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915618   40272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 19:18:05.915626   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915630   40272 command_runner.go:130] >       "size":  "60854229",
	I1202 19:18:05.915634   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915637   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915641   40272 command_runner.go:130] >       },
	I1202 19:18:05.915645   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915652   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915661   40272 command_runner.go:130] >     },
	I1202 19:18:05.915666   40272 command_runner.go:130] >     {
	I1202 19:18:05.915681   40272 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 19:18:05.915686   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915691   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 19:18:05.915697   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915702   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915710   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 19:18:05.915713   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915718   40272 command_runner.go:130] >       "size":  "84947242",
	I1202 19:18:05.915721   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915725   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915728   40272 command_runner.go:130] >       },
	I1202 19:18:05.915736   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915743   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915746   40272 command_runner.go:130] >     },
	I1202 19:18:05.915750   40272 command_runner.go:130] >     {
	I1202 19:18:05.915756   40272 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 19:18:05.915762   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915771   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 19:18:05.915778   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915782   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915790   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 19:18:05.915797   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915805   40272 command_runner.go:130] >       "size":  "72167568",
	I1202 19:18:05.915809   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915813   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915816   40272 command_runner.go:130] >       },
	I1202 19:18:05.915820   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915824   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915828   40272 command_runner.go:130] >     },
	I1202 19:18:05.915831   40272 command_runner.go:130] >     {
	I1202 19:18:05.915841   40272 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 19:18:05.915852   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915858   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 19:18:05.915861   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915866   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915880   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 19:18:05.915883   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915887   40272 command_runner.go:130] >       "size":  "74105124",
	I1202 19:18:05.915891   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915896   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915902   40272 command_runner.go:130] >     },
	I1202 19:18:05.915906   40272 command_runner.go:130] >     {
	I1202 19:18:05.915912   40272 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 19:18:05.915917   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915925   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 19:18:05.915930   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915934   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915943   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 19:18:05.915949   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915953   40272 command_runner.go:130] >       "size":  "49819792",
	I1202 19:18:05.915961   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915968   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915972   40272 command_runner.go:130] >       },
	I1202 19:18:05.915976   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915982   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915988   40272 command_runner.go:130] >     },
	I1202 19:18:05.915992   40272 command_runner.go:130] >     {
	I1202 19:18:05.915999   40272 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 19:18:05.916003   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.916010   40272 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.916014   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916018   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.916027   40272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 19:18:05.916043   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916046   40272 command_runner.go:130] >       "size":  "517328",
	I1202 19:18:05.916049   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.916054   40272 command_runner.go:130] >         "value":  "65535"
	I1202 19:18:05.916064   40272 command_runner.go:130] >       },
	I1202 19:18:05.916068   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.916072   40272 command_runner.go:130] >       "pinned":  true
	I1202 19:18:05.916075   40272 command_runner.go:130] >     }
	I1202 19:18:05.916078   40272 command_runner.go:130] >   ]
	I1202 19:18:05.916081   40272 command_runner.go:130] > }
	I1202 19:18:05.916221   40272 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:18:05.916234   40272 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:18:05.916241   40272 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:18:05.916331   40272 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:18:05.916421   40272 ssh_runner.go:195] Run: crio config
	I1202 19:18:05.964092   40272 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 19:18:05.964119   40272 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 19:18:05.964127   40272 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 19:18:05.964130   40272 command_runner.go:130] > #
	I1202 19:18:05.964138   40272 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 19:18:05.964149   40272 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 19:18:05.964156   40272 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 19:18:05.964166   40272 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 19:18:05.964176   40272 command_runner.go:130] > # reload'.
	I1202 19:18:05.964182   40272 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 19:18:05.964189   40272 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 19:18:05.964197   40272 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 19:18:05.964204   40272 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 19:18:05.964210   40272 command_runner.go:130] > [crio]
	I1202 19:18:05.964216   40272 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 19:18:05.964223   40272 command_runner.go:130] > # containers images, in this directory.
	I1202 19:18:05.964661   40272 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 19:18:05.964681   40272 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 19:18:05.965195   40272 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 19:18:05.965213   40272 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 19:18:05.965585   40272 command_runner.go:130] > # imagestore = ""
	I1202 19:18:05.965601   40272 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 19:18:05.965614   40272 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 19:18:05.966162   40272 command_runner.go:130] > # storage_driver = "overlay"
	I1202 19:18:05.966179   40272 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 19:18:05.966186   40272 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 19:18:05.966362   40272 command_runner.go:130] > # storage_option = [
	I1202 19:18:05.966573   40272 command_runner.go:130] > # ]
	I1202 19:18:05.966591   40272 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 19:18:05.966598   40272 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 19:18:05.966880   40272 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 19:18:05.966894   40272 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 19:18:05.966902   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 19:18:05.966914   40272 command_runner.go:130] > # always happen on a node reboot
	I1202 19:18:05.967066   40272 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 19:18:05.967095   40272 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 19:18:05.967102   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 19:18:05.967107   40272 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 19:18:05.967213   40272 command_runner.go:130] > # version_file_persist = ""
	I1202 19:18:05.967225   40272 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 19:18:05.967234   40272 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 19:18:05.967423   40272 command_runner.go:130] > # internal_wipe = true
	I1202 19:18:05.967436   40272 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 19:18:05.967449   40272 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 19:18:05.967580   40272 command_runner.go:130] > # internal_repair = true
	I1202 19:18:05.967590   40272 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 19:18:05.967596   40272 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 19:18:05.967602   40272 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 19:18:05.967753   40272 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 19:18:05.967764   40272 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 19:18:05.967767   40272 command_runner.go:130] > [crio.api]
	I1202 19:18:05.967773   40272 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 19:18:05.967953   40272 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 19:18:05.967969   40272 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 19:18:05.968134   40272 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 19:18:05.968145   40272 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 19:18:05.968169   40272 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 19:18:05.968297   40272 command_runner.go:130] > # stream_port = "0"
	I1202 19:18:05.968307   40272 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 19:18:05.968473   40272 command_runner.go:130] > # stream_enable_tls = false
	I1202 19:18:05.968483   40272 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 19:18:05.968653   40272 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 19:18:05.968663   40272 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 19:18:05.968669   40272 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968775   40272 command_runner.go:130] > # stream_tls_cert = ""
	I1202 19:18:05.968785   40272 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 19:18:05.968792   40272 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968905   40272 command_runner.go:130] > # stream_tls_key = ""
	I1202 19:18:05.968915   40272 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 19:18:05.968922   40272 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 19:18:05.968926   40272 command_runner.go:130] > # automatically pick up the changes.
	I1202 19:18:05.969055   40272 command_runner.go:130] > # stream_tls_ca = ""
	I1202 19:18:05.969084   40272 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969257   40272 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 19:18:05.969270   40272 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969439   40272 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 19:18:05.969511   40272 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 19:18:05.969528   40272 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 19:18:05.969532   40272 command_runner.go:130] > [crio.runtime]
	I1202 19:18:05.969539   40272 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 19:18:05.969544   40272 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 19:18:05.969548   40272 command_runner.go:130] > # "nofile=1024:2048"
	I1202 19:18:05.969554   40272 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 19:18:05.969676   40272 command_runner.go:130] > # default_ulimits = [
	I1202 19:18:05.969684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.969691   40272 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 19:18:05.969900   40272 command_runner.go:130] > # no_pivot = false
	I1202 19:18:05.969912   40272 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 19:18:05.969920   40272 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 19:18:05.970109   40272 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 19:18:05.970119   40272 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 19:18:05.970124   40272 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 19:18:05.970131   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970227   40272 command_runner.go:130] > # conmon = ""
	I1202 19:18:05.970236   40272 command_runner.go:130] > # Cgroup setting for conmon
	I1202 19:18:05.970244   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 19:18:05.970379   40272 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 19:18:05.970389   40272 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 19:18:05.970395   40272 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 19:18:05.970403   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970521   40272 command_runner.go:130] > # conmon_env = [
	I1202 19:18:05.970671   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970681   40272 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 19:18:05.970687   40272 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 19:18:05.970693   40272 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 19:18:05.970697   40272 command_runner.go:130] > # default_env = [
	I1202 19:18:05.970827   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970837   40272 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 19:18:05.970846   40272 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1202 19:18:05.970995   40272 command_runner.go:130] > # selinux = false
	I1202 19:18:05.971005   40272 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 19:18:05.971014   40272 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 19:18:05.971019   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971123   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.971133   40272 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 19:18:05.971140   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971283   40272 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 19:18:05.971297   40272 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 19:18:05.971349   40272 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 19:18:05.971394   40272 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 19:18:05.971420   40272 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 19:18:05.971426   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971532   40272 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 19:18:05.971542   40272 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 19:18:05.971554   40272 command_runner.go:130] > # the cgroup blockio controller.
	I1202 19:18:05.971691   40272 command_runner.go:130] > # blockio_config_file = ""
	I1202 19:18:05.971702   40272 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 19:18:05.971706   40272 command_runner.go:130] > # blockio parameters.
	I1202 19:18:05.971888   40272 command_runner.go:130] > # blockio_reload = false
	I1202 19:18:05.971899   40272 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 19:18:05.971911   40272 command_runner.go:130] > # irqbalance daemon.
	I1202 19:18:05.972089   40272 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 19:18:05.972099   40272 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 19:18:05.972107   40272 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 19:18:05.972118   40272 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 19:18:05.972238   40272 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 19:18:05.972249   40272 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 19:18:05.972255   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.972373   40272 command_runner.go:130] > # rdt_config_file = ""
	I1202 19:18:05.972382   40272 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 19:18:05.972510   40272 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 19:18:05.972521   40272 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 19:18:05.972668   40272 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 19:18:05.972679   40272 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 19:18:05.972686   40272 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 19:18:05.972689   40272 command_runner.go:130] > # will be added.
	I1202 19:18:05.972804   40272 command_runner.go:130] > # default_capabilities = [
	I1202 19:18:05.972909   40272 command_runner.go:130] > # 	"CHOWN",
	I1202 19:18:05.973035   40272 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 19:18:05.973186   40272 command_runner.go:130] > # 	"FSETID",
	I1202 19:18:05.973194   40272 command_runner.go:130] > # 	"FOWNER",
	I1202 19:18:05.973322   40272 command_runner.go:130] > # 	"SETGID",
	I1202 19:18:05.973468   40272 command_runner.go:130] > # 	"SETUID",
	I1202 19:18:05.973500   40272 command_runner.go:130] > # 	"SETPCAP",
	I1202 19:18:05.973632   40272 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 19:18:05.973847   40272 command_runner.go:130] > # 	"KILL",
	I1202 19:18:05.973855   40272 command_runner.go:130] > # ]
	I1202 19:18:05.973864   40272 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 19:18:05.973870   40272 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 19:18:05.974039   40272 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 19:18:05.974052   40272 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 19:18:05.974059   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974062   40272 command_runner.go:130] > default_sysctls = [
	I1202 19:18:05.974148   40272 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 19:18:05.974179   40272 command_runner.go:130] > ]
	I1202 19:18:05.974185   40272 command_runner.go:130] > # List of devices on the host that a
	I1202 19:18:05.974297   40272 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 19:18:05.974459   40272 command_runner.go:130] > # allowed_devices = [
	I1202 19:18:05.974492   40272 command_runner.go:130] > # 	"/dev/fuse",
	I1202 19:18:05.974497   40272 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 19:18:05.974500   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974505   40272 command_runner.go:130] > # List of additional devices. specified as
	I1202 19:18:05.974517   40272 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 19:18:05.974706   40272 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 19:18:05.974717   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974722   40272 command_runner.go:130] > # additional_devices = [
	I1202 19:18:05.974730   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974735   40272 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 19:18:05.974870   40272 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 19:18:05.975061   40272 command_runner.go:130] > # 	"/etc/cdi",
	I1202 19:18:05.975069   40272 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 19:18:05.975204   40272 command_runner.go:130] > # ]
	I1202 19:18:05.975337   40272 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 19:18:05.975610   40272 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 19:18:05.975708   40272 command_runner.go:130] > # Defaults to false.
	I1202 19:18:05.975730   40272 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 19:18:05.975766   40272 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 19:18:05.975927   40272 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 19:18:05.976135   40272 command_runner.go:130] > # hooks_dir = [
	I1202 19:18:05.976173   40272 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 19:18:05.976199   40272 command_runner.go:130] > # ]
	I1202 19:18:05.976222   40272 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 19:18:05.976257   40272 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 19:18:05.976344   40272 command_runner.go:130] > # its default mounts from the following two files:
	I1202 19:18:05.976363   40272 command_runner.go:130] > #
	I1202 19:18:05.976438   40272 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 19:18:05.976465   40272 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 19:18:05.976485   40272 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 19:18:05.976561   40272 command_runner.go:130] > #
	I1202 19:18:05.976637   40272 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 19:18:05.976658   40272 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 19:18:05.976681   40272 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 19:18:05.976711   40272 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 19:18:05.976797   40272 command_runner.go:130] > #
	I1202 19:18:05.976852   40272 command_runner.go:130] > # default_mounts_file = ""
	I1202 19:18:05.976886   40272 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 19:18:05.976912   40272 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 19:18:05.976930   40272 command_runner.go:130] > # pids_limit = -1
	I1202 19:18:05.977014   40272 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1202 19:18:05.977040   40272 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 19:18:05.977112   40272 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 19:18:05.977136   40272 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 19:18:05.977153   40272 command_runner.go:130] > # log_size_max = -1
	I1202 19:18:05.977240   40272 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 19:18:05.977264   40272 command_runner.go:130] > # log_to_journald = false
	I1202 19:18:05.977344   40272 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 19:18:05.977370   40272 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 19:18:05.977390   40272 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 19:18:05.977478   40272 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 19:18:05.977500   40272 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 19:18:05.977570   40272 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 19:18:05.977596   40272 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 19:18:05.977614   40272 command_runner.go:130] > # read_only = false
	I1202 19:18:05.977722   40272 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 19:18:05.977797   40272 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 19:18:05.977817   40272 command_runner.go:130] > # live configuration reload.
	I1202 19:18:05.977836   40272 command_runner.go:130] > # log_level = "info"
	I1202 19:18:05.977872   40272 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 19:18:05.977956   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.978011   40272 command_runner.go:130] > # log_filter = ""
	I1202 19:18:05.978051   40272 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978073   40272 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 19:18:05.978093   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978128   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978214   40272 command_runner.go:130] > # uid_mappings = ""
	I1202 19:18:05.978236   40272 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978257   40272 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 19:18:05.978338   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978377   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978397   40272 command_runner.go:130] > # gid_mappings = ""
	I1202 19:18:05.978483   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 19:18:05.978556   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978583   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978606   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978700   40272 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 19:18:05.978728   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 19:18:05.978805   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978827   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978909   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978941   40272 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 19:18:05.979022   40272 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 19:18:05.979049   40272 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 19:18:05.979139   40272 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 19:18:05.979164   40272 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 19:18:05.979239   40272 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 19:18:05.979264   40272 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 19:18:05.979291   40272 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 19:18:05.979376   40272 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 19:18:05.979411   40272 command_runner.go:130] > # drop_infra_ctr = true
	I1202 19:18:05.979493   40272 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 19:18:05.979517   40272 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 19:18:05.979541   40272 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 19:18:05.979625   40272 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 19:18:05.979649   40272 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 19:18:05.979723   40272 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 19:18:05.979744   40272 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 19:18:05.979763   40272 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 19:18:05.979845   40272 command_runner.go:130] > # shared_cpuset = ""
	I1202 19:18:05.979867   40272 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 19:18:05.979937   40272 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 19:18:05.979961   40272 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 19:18:05.979983   40272 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 19:18:05.980069   40272 command_runner.go:130] > # pinns_path = ""
	I1202 19:18:05.980091   40272 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 19:18:05.980113   40272 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 19:18:05.980205   40272 command_runner.go:130] > # enable_criu_support = true
	I1202 19:18:05.980225   40272 command_runner.go:130] > # Enable/disable the generation of container and
	I1202 19:18:05.980246   40272 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 19:18:05.980337   40272 command_runner.go:130] > # enable_pod_events = false
	I1202 19:18:05.980364   40272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 19:18:05.980435   40272 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 19:18:05.980456   40272 command_runner.go:130] > # default_runtime = "crun"
	I1202 19:18:05.980476   40272 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 19:18:05.980567   40272 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior, where they are created as directories).
	I1202 19:18:05.980641   40272 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 19:18:05.980666   40272 command_runner.go:130] > # creation as a file is not desired either.
	I1202 19:18:05.980689   40272 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 19:18:05.980782   40272 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 19:18:05.980807   40272 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 19:18:05.980885   40272 command_runner.go:130] > # ]
	I1202 19:18:05.980907   40272 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 19:18:05.980989   40272 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 19:18:05.981060   40272 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 19:18:05.981080   40272 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 19:18:05.981155   40272 command_runner.go:130] > #
	I1202 19:18:05.981180   40272 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 19:18:05.981237   40272 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 19:18:05.981273   40272 command_runner.go:130] > # runtime_type = "oci"
	I1202 19:18:05.981291   40272 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 19:18:05.981311   40272 command_runner.go:130] > # inherit_default_runtime = false
	I1202 19:18:05.981423   40272 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 19:18:05.981442   40272 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 19:18:05.981461   40272 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 19:18:05.981479   40272 command_runner.go:130] > # monitor_env = []
	I1202 19:18:05.981507   40272 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 19:18:05.981530   40272 command_runner.go:130] > # allowed_annotations = []
	I1202 19:18:05.981553   40272 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 19:18:05.981571   40272 command_runner.go:130] > # no_sync_log = false
	I1202 19:18:05.981591   40272 command_runner.go:130] > # default_annotations = {}
	I1202 19:18:05.981620   40272 command_runner.go:130] > # stream_websockets = false
	I1202 19:18:05.981644   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.981733   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.981765   40272 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 19:18:05.981785   40272 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 19:18:05.981807   40272 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 19:18:05.981914   40272 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 19:18:05.981934   40272 command_runner.go:130] > #   in $PATH.
	I1202 19:18:05.981954   40272 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 19:18:05.981989   40272 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 19:18:05.982017   40272 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 19:18:05.982034   40272 command_runner.go:130] > #   state.
	I1202 19:18:05.982057   40272 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 19:18:05.982098   40272 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 19:18:05.982128   40272 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 19:18:05.982148   40272 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 19:18:05.982168   40272 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 19:18:05.982199   40272 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 19:18:05.982235   40272 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 19:18:05.982255   40272 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 19:18:05.982277   40272 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 19:18:05.982307   40272 command_runner.go:130] > #   The currently recognized values are:
	I1202 19:18:05.982329   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 19:18:05.983678   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 19:18:05.983703   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 19:18:05.983795   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 19:18:05.983829   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 19:18:05.983905   40272 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 19:18:05.983938   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 19:18:05.983958   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 19:18:05.983978   40272 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 19:18:05.984011   40272 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 19:18:05.984040   40272 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 19:18:05.984061   40272 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 19:18:05.984082   40272 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 19:18:05.984114   40272 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 19:18:05.984143   40272 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 19:18:05.984168   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 19:18:05.984191   40272 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 19:18:05.984220   40272 command_runner.go:130] > #   deprecated option "conmon".
	I1202 19:18:05.984244   40272 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 19:18:05.984265   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 19:18:05.984298   40272 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 19:18:05.984320   40272 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 19:18:05.984343   40272 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 19:18:05.984373   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 19:18:05.984413   40272 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1202 19:18:05.984432   40272 command_runner.go:130] > #   conmon-rs by using:
	I1202 19:18:05.984470   40272 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 19:18:05.984495   40272 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 19:18:05.984515   40272 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 19:18:05.984549   40272 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 19:18:05.984571   40272 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 19:18:05.984595   40272 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 19:18:05.984630   40272 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 19:18:05.984653   40272 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 19:18:05.984677   40272 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 19:18:05.984716   40272 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 19:18:05.984737   40272 command_runner.go:130] > #   when a machine crash happens.
	I1202 19:18:05.984765   40272 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 19:18:05.984801   40272 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 19:18:05.984825   40272 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 19:18:05.984846   40272 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 19:18:05.984877   40272 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 19:18:05.984902   40272 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1202 19:18:05.984921   40272 command_runner.go:130] > #
	I1202 19:18:05.984958   40272 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 19:18:05.984976   40272 command_runner.go:130] > #
	I1202 19:18:05.984996   40272 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 19:18:05.985026   40272 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 19:18:05.985052   40272 command_runner.go:130] > #
	I1202 19:18:05.985075   40272 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 19:18:05.985099   40272 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 19:18:05.985125   40272 command_runner.go:130] > #
	I1202 19:18:05.985149   40272 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 19:18:05.985169   40272 command_runner.go:130] > # feature.
	I1202 19:18:05.985199   40272 command_runner.go:130] > #
	I1202 19:18:05.985224   40272 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1202 19:18:05.985244   40272 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 19:18:05.985274   40272 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 19:18:05.985304   40272 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 19:18:05.985329   40272 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 19:18:05.985349   40272 command_runner.go:130] > #
	I1202 19:18:05.985381   40272 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 19:18:05.985404   40272 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 19:18:05.985422   40272 command_runner.go:130] > #
	I1202 19:18:05.985454   40272 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1202 19:18:05.985482   40272 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 19:18:05.985497   40272 command_runner.go:130] > #
	I1202 19:18:05.985518   40272 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 19:18:05.985550   40272 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 19:18:05.985582   40272 command_runner.go:130] > # limitation.
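A minimal pod-side sketch of the notifier flow described above, assuming a runtime handler that lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations; in the config dumped here neither "crun" nor "runc" does, so such a handler would have to be configured first, and the handler, pod and image names below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo
  annotations:
    # Ask CRI-O to stop the workload ~5s after a blocked syscall is observed.
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  # Hypothetical handler whose allowed_annotations include the notifier annotation.
  runtimeClassName: notifier-runtime
  # Required, otherwise the kubelet restarts the container immediately.
  restartPolicy: Never
  containers:
    - name: app
      image: registry.k8s.io/pause:3.10.1   # placeholder image
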
	I1202 19:18:05.985602   40272 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 19:18:05.985622   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 19:18:05.985670   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985689   40272 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 19:18:05.985704   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985709   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985725   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985731   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985741   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985745   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985749   40272 command_runner.go:130] > allowed_annotations = [
	I1202 19:18:05.985754   40272 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 19:18:05.985759   40272 command_runner.go:130] > ]
	I1202 19:18:05.985765   40272 command_runner.go:130] > privileged_without_host_devices = false
	I1202 19:18:05.985769   40272 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 19:18:05.985782   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 19:18:05.985786   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985795   40272 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 19:18:05.985801   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985810   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985821   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985829   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985833   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985837   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985845   40272 command_runner.go:130] > privileged_without_host_devices = false
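The two handlers above ("crun", the default, and "runc") are selected per pod through a RuntimeClass. A minimal sketch of that wiring for the non-default "runc" handler, assuming nothing beyond what the config above defines (object, pod and image names are placeholders):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runc               # arbitrary object name
handler: runc              # must match [crio.runtime.runtimes.runc]
---
apiVersion: v1
kind: Pod
metadata:
  name: runc-demo
spec:
  runtimeClassName: runc   # without this, CRI-O uses default_runtime ("crun")
  containers:
    - name: app
      image: registry.k8s.io/pause:3.10.1   # placeholder image
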
	I1202 19:18:05.985852   40272 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 19:18:05.985860   40272 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 19:18:05.985867   40272 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 19:18:05.985881   40272 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 19:18:05.985892   40272 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 19:18:05.985905   40272 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 19:18:05.985915   40272 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 19:18:05.985926   40272 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 19:18:05.985936   40272 command_runner.go:130] > # For a container to opt in to this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 19:18:05.985947   40272 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 19:18:05.985953   40272 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 19:18:05.985964   40272 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 19:18:05.985968   40272 command_runner.go:130] > # Example:
	I1202 19:18:05.985975   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 19:18:05.985980   40272 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 19:18:05.985987   40272 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 19:18:05.985993   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 19:18:05.985996   40272 command_runner.go:130] > # cpuset = "0-1"
	I1202 19:18:05.986000   40272 command_runner.go:130] > # cpushares = "5"
	I1202 19:18:05.986007   40272 command_runner.go:130] > # cpuquota = "1000"
	I1202 19:18:05.986011   40272 command_runner.go:130] > # cpuperiod = "100000"
	I1202 19:18:05.986014   40272 command_runner.go:130] > # cpulimit = "35"
	I1202 19:18:05.986018   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.986025   40272 command_runner.go:130] > # The workload name is workload-type.
	I1202 19:18:05.986033   40272 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 19:18:05.986041   40272 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 19:18:05.986047   40272 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 19:18:05.986057   40272 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 19:18:05.986069   40272 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
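A minimal pod-side sketch of the example workload above (name "workload-type", activation annotation "io.crio/workload", prefix "io.crio.workload-type"), following the $annotation_prefix.$resource/$ctrName form described earlier; nothing in this run defines such a workload, and the pod, container and image names are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    # Opt the pod into the "workload-type" workload (key only, value is ignored).
    io.crio/workload: ""
    # Override the default cpushares for the container named "app".
    io.crio.workload-type.cpushares/app: "512"
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.10.1   # placeholder image
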
	I1202 19:18:05.986075   40272 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 19:18:05.986082   40272 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 19:18:05.986086   40272 command_runner.go:130] > # Default value is set to true
	I1202 19:18:05.986096   40272 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 19:18:05.986102   40272 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 19:18:05.986107   40272 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 19:18:05.986117   40272 command_runner.go:130] > # Default value is set to 'false'
	I1202 19:18:05.986121   40272 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 19:18:05.986127   40272 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1202 19:18:05.986137   40272 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 19:18:05.986142   40272 command_runner.go:130] > # timezone = ""
	I1202 19:18:05.986151   40272 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 19:18:05.986154   40272 command_runner.go:130] > #
	I1202 19:18:05.986160   40272 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 19:18:05.986171   40272 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 19:18:05.986178   40272 command_runner.go:130] > [crio.image]
	I1202 19:18:05.986184   40272 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 19:18:05.986189   40272 command_runner.go:130] > # default_transport = "docker://"
	I1202 19:18:05.986197   40272 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 19:18:05.986205   40272 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986212   40272 command_runner.go:130] > # global_auth_file = ""
	I1202 19:18:05.986217   40272 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 19:18:05.986223   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986230   40272 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.986237   40272 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 19:18:05.986243   40272 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986248   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986255   40272 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 19:18:05.986260   40272 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 19:18:05.986266   40272 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1202 19:18:05.986275   40272 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1202 19:18:05.986281   40272 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 19:18:05.986291   40272 command_runner.go:130] > # pause_command = "/pause"
	I1202 19:18:05.986301   40272 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 19:18:05.986309   40272 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 19:18:05.986319   40272 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 19:18:05.986324   40272 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 19:18:05.986331   40272 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 19:18:05.986337   40272 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 19:18:05.986343   40272 command_runner.go:130] > # pinned_images = [
	I1202 19:18:05.986346   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986352   40272 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 19:18:05.986360   40272 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 19:18:05.986367   40272 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 19:18:05.986376   40272 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 19:18:05.986381   40272 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 19:18:05.986388   40272 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 19:18:05.986394   40272 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 19:18:05.986401   40272 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 19:18:05.986415   40272 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 19:18:05.986422   40272 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1202 19:18:05.986431   40272 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1202 19:18:05.986436   40272 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1202 19:18:05.986442   40272 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 19:18:05.986452   40272 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 19:18:05.986456   40272 command_runner.go:130] > # changing them here.
	I1202 19:18:05.986462   40272 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 19:18:05.986468   40272 command_runner.go:130] > # insecure_registries = [
	I1202 19:18:05.986472   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986478   40272 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 19:18:05.986486   40272 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 19:18:05.986490   40272 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 19:18:05.986495   40272 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 19:18:05.986499   40272 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 19:18:05.986505   40272 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 19:18:05.986518   40272 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 19:18:05.986525   40272 command_runner.go:130] > # auto_reload_registries = false
	I1202 19:18:05.986531   40272 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 19:18:05.986543   40272 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1202 19:18:05.986549   40272 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 19:18:05.986556   40272 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 19:18:05.986561   40272 command_runner.go:130] > # The mode of short name resolution.
	I1202 19:18:05.986568   40272 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 19:18:05.986578   40272 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1202 19:18:05.986583   40272 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 19:18:05.986588   40272 command_runner.go:130] > # short_name_mode = "enforcing"
	I1202 19:18:05.986593   40272 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1202 19:18:05.986602   40272 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 19:18:05.986606   40272 command_runner.go:130] > # oci_artifact_mount_support = true
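For context, "mounting OCI artifacts" refers to Kubernetes image volumes; a hedged sketch, assuming a cluster with the ImageVolume feature gate enabled and a hypothetical artifact reference (neither is part of this test run):

apiVersion: v1
kind: Pod
metadata:
  name: artifact-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.10.1   # placeholder image
      volumeMounts:
        - name: artifact
          mountPath: /data
  volumes:
    - name: artifact
      image:                                    # image volume source (ImageVolume feature gate)
        reference: example.com/configs/app:v1   # hypothetical OCI artifact reference
        pullPolicy: IfNotPresent
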
	I1202 19:18:05.986612   40272 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 19:18:05.986619   40272 command_runner.go:130] > # CNI plugins.
	I1202 19:18:05.986623   40272 command_runner.go:130] > [crio.network]
	I1202 19:18:05.986629   40272 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 19:18:05.986637   40272 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1202 19:18:05.986640   40272 command_runner.go:130] > # cni_default_network = ""
	I1202 19:18:05.986646   40272 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 19:18:05.986655   40272 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 19:18:05.986661   40272 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 19:18:05.986664   40272 command_runner.go:130] > # plugin_dirs = [
	I1202 19:18:05.986668   40272 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 19:18:05.986674   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986678   40272 command_runner.go:130] > # List of included pod metrics.
	I1202 19:18:05.986681   40272 command_runner.go:130] > # included_pod_metrics = [
	I1202 19:18:05.986684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986690   40272 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1202 19:18:05.986696   40272 command_runner.go:130] > [crio.metrics]
	I1202 19:18:05.986701   40272 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 19:18:05.986705   40272 command_runner.go:130] > # enable_metrics = false
	I1202 19:18:05.986718   40272 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 19:18:05.986723   40272 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 19:18:05.986732   40272 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 19:18:05.986738   40272 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 19:18:05.986744   40272 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 19:18:05.986748   40272 command_runner.go:130] > # metrics_collectors = [
	I1202 19:18:05.986753   40272 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 19:18:05.986760   40272 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 19:18:05.986764   40272 command_runner.go:130] > # 	"containers_oom_total",
	I1202 19:18:05.986768   40272 command_runner.go:130] > # 	"processes_defunct",
	I1202 19:18:05.986777   40272 command_runner.go:130] > # 	"operations_total",
	I1202 19:18:05.986782   40272 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 19:18:05.986787   40272 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 19:18:05.986793   40272 command_runner.go:130] > # 	"operations_errors_total",
	I1202 19:18:05.986797   40272 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 19:18:05.986802   40272 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 19:18:05.986809   40272 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 19:18:05.986814   40272 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 19:18:05.986819   40272 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 19:18:05.986823   40272 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 19:18:05.986829   40272 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 19:18:05.986836   40272 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 19:18:05.986840   40272 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 19:18:05.986844   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986852   40272 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 19:18:05.986862   40272 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 19:18:05.986870   40272 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 19:18:05.986877   40272 command_runner.go:130] > # metrics_port = 9090
	I1202 19:18:05.986882   40272 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 19:18:05.986886   40272 command_runner.go:130] > # metrics_socket = ""
	I1202 19:18:05.986893   40272 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 19:18:05.986899   40272 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 19:18:05.986906   40272 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 19:18:05.986918   40272 command_runner.go:130] > # certificate on any modification event.
	I1202 19:18:05.986933   40272 command_runner.go:130] > # metrics_cert = ""
	I1202 19:18:05.986939   40272 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 19:18:05.986947   40272 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 19:18:05.986950   40272 command_runner.go:130] > # metrics_key = ""
	I1202 19:18:05.986956   40272 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 19:18:05.986962   40272 command_runner.go:130] > [crio.tracing]
	I1202 19:18:05.986967   40272 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 19:18:05.986972   40272 command_runner.go:130] > # enable_tracing = false
	I1202 19:18:05.986979   40272 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 19:18:05.986984   40272 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 19:18:05.986990   40272 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 19:18:05.986997   40272 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 19:18:05.987001   40272 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 19:18:05.987007   40272 command_runner.go:130] > [crio.nri]
	I1202 19:18:05.987011   40272 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 19:18:05.987015   40272 command_runner.go:130] > # enable_nri = true
	I1202 19:18:05.987019   40272 command_runner.go:130] > # NRI socket to listen on.
	I1202 19:18:05.987029   40272 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 19:18:05.987033   40272 command_runner.go:130] > # NRI plugin directory to use.
	I1202 19:18:05.987037   40272 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 19:18:05.987045   40272 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 19:18:05.987050   40272 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 19:18:05.987056   40272 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 19:18:05.987116   40272 command_runner.go:130] > # nri_disable_connections = false
	I1202 19:18:05.987126   40272 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 19:18:05.987130   40272 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 19:18:05.987136   40272 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 19:18:05.987142   40272 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 19:18:05.987147   40272 command_runner.go:130] > # NRI default validator configuration.
	I1202 19:18:05.987157   40272 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 19:18:05.987166   40272 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 19:18:05.987170   40272 command_runner.go:130] > # can be restricted/rejected:
	I1202 19:18:05.987178   40272 command_runner.go:130] > # - OCI hook injection
	I1202 19:18:05.987186   40272 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 19:18:05.987191   40272 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 19:18:05.987196   40272 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 19:18:05.987203   40272 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 19:18:05.987209   40272 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 19:18:05.987216   40272 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 19:18:05.987225   40272 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 19:18:05.987230   40272 command_runner.go:130] > #
	I1202 19:18:05.987234   40272 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 19:18:05.987239   40272 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 19:18:05.987245   40272 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 19:18:05.987254   40272 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 19:18:05.987260   40272 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 19:18:05.987268   40272 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 19:18:05.987279   40272 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 19:18:05.987283   40272 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 19:18:05.987286   40272 command_runner.go:130] > # ]
	I1202 19:18:05.987291   40272 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1202 19:18:05.987299   40272 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 19:18:05.987302   40272 command_runner.go:130] > [crio.stats]
	I1202 19:18:05.987308   40272 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 19:18:05.987316   40272 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 19:18:05.987320   40272 command_runner.go:130] > # stats_collection_period = 0
	I1202 19:18:05.987326   40272 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 19:18:05.987334   40272 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 19:18:05.987344   40272 command_runner.go:130] > # collection_period = 0
	I1202 19:18:05.987392   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941536561Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 19:18:05.987405   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941573139Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 19:18:05.987421   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941598771Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 19:18:05.987431   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941629007Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 19:18:05.987447   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.94184771Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.987460   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.942236436Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 19:18:05.987477   40272 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 19:18:05.987606   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:05.987620   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:05.987644   40272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:18:05.987670   40272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:18:05.987799   40272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:18:05.987877   40272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:18:05.995250   40272 command_runner.go:130] > kubeadm
	I1202 19:18:05.995271   40272 command_runner.go:130] > kubectl
	I1202 19:18:05.995276   40272 command_runner.go:130] > kubelet
	I1202 19:18:05.995308   40272 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:18:05.995379   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:18:06.002605   40272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:18:06.015240   40272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:18:06.033933   40272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 19:18:06.047469   40272 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:18:06.051453   40272 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 19:18:06.051580   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:06.161840   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:06.543709   40272 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:18:06.543774   40272 certs.go:195] generating shared ca certs ...
	I1202 19:18:06.543803   40272 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:06.543968   40272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:18:06.544037   40272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:18:06.544058   40272 certs.go:257] generating profile certs ...
	I1202 19:18:06.544203   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:18:06.544311   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:18:06.544381   40272 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:18:06.544424   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:18:06.544458   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:18:06.544493   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:18:06.544537   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:18:06.544570   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:18:06.544599   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:18:06.544648   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:18:06.544683   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:18:06.544773   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:18:06.544828   40272 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:18:06.544854   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:18:06.544932   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:18:06.551062   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:18:06.551141   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:18:06.551220   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:06.551261   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.551291   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.551312   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.552213   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:18:06.569384   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:18:06.587883   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:18:06.609527   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:18:06.628039   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:18:06.644623   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:18:06.662478   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:18:06.679440   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:18:06.696330   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:18:06.713584   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:18:06.731033   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:18:06.747714   40272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:18:06.761265   40272 ssh_runner.go:195] Run: openssl version
	I1202 19:18:06.766652   40272 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 19:18:06.767017   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:18:06.774639   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.777834   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778051   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778107   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.818127   40272 command_runner.go:130] > b5213941
	I1202 19:18:06.818625   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:18:06.826391   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:18:06.834719   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838324   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838367   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838418   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.878978   40272 command_runner.go:130] > 51391683
	I1202 19:18:06.879420   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:18:06.887230   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:18:06.895470   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899261   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899287   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899335   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.940199   40272 command_runner.go:130] > 3ec20f2e
	I1202 19:18:06.940694   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
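The hash-and-symlink steps above follow the standard OpenSSL trust-store convention: each CA file under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). A minimal Go sketch of that pattern, shelling out to openssl the same way the ssh_runner commands in the log do (paths illustrative, error handling simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert reproduces the pattern seen in the log: compute the OpenSSL
// subject hash of a PEM certificate and symlink it into /etc/ssl/certs
// as <hash>.0 so the system trust store can find it.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// equivalent of "ln -fs": drop any stale link, then create the new one
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}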
	I1202 19:18:06.948359   40272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951793   40272 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951816   40272 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 19:18:06.951822   40272 command_runner.go:130] > Device: 259,1	Inode: 1315539     Links: 1
	I1202 19:18:06.951851   40272 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:06.951865   40272 command_runner.go:130] > Access: 2025-12-02 19:13:58.595474405 +0000
	I1202 19:18:06.951871   40272 command_runner.go:130] > Modify: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951876   40272 command_runner.go:130] > Change: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951881   40272 command_runner.go:130] >  Birth: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951960   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:18:06.996850   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:06.997318   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:18:07.037433   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.037885   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:18:07.078161   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.078666   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:18:07.119364   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.119441   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:18:07.159628   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.160136   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:18:07.204176   40272 command_runner.go:130] > Certificate will not expire
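"openssl x509 -checkend 86400" exits non-zero only if the certificate expires within the next 86400 seconds (24 hours), which is why each check above reports "Certificate will not expire". A rough equivalent of that check in Go with crypto/x509 (cert path taken from the log, purely illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// mirroring "openssl x509 -checkend <seconds>" from the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}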
	I1202 19:18:07.204662   40272 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:07.204768   40272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:18:07.204851   40272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:18:07.233427   40272 cri.go:89] found id: ""
	I1202 19:18:07.233514   40272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:18:07.240330   40272 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 19:18:07.240352   40272 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 19:18:07.240359   40272 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 19:18:07.241346   40272 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:18:07.241363   40272 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:18:07.241437   40272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:18:07.248549   40272 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:18:07.248941   40272 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-374330" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249040   40272 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "functional-374330" cluster setting kubeconfig missing "functional-374330" context setting]
	I1202 19:18:07.249312   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.249749   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249896   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.250443   40272 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:18:07.250467   40272 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:18:07.250474   40272 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:18:07.250478   40272 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:18:07.250487   40272 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:18:07.250526   40272 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:18:07.250793   40272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:18:07.258519   40272 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:18:07.258557   40272 kubeadm.go:602] duration metric: took 17.188352ms to restartPrimaryControlPlane
	I1202 19:18:07.258569   40272 kubeadm.go:403] duration metric: took 53.913832ms to StartCluster
	I1202 19:18:07.258583   40272 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.258647   40272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.259281   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.259482   40272 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:18:07.259876   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:07.259927   40272 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:18:07.259993   40272 addons.go:70] Setting storage-provisioner=true in profile "functional-374330"
	I1202 19:18:07.260007   40272 addons.go:239] Setting addon storage-provisioner=true in "functional-374330"
	I1202 19:18:07.260034   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.260061   40272 addons.go:70] Setting default-storageclass=true in profile "functional-374330"
	I1202 19:18:07.260107   40272 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-374330"
	I1202 19:18:07.260433   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.260513   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.266365   40272 out.go:179] * Verifying Kubernetes components...
	I1202 19:18:07.269343   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:07.293348   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.293507   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.293796   40272 addons.go:239] Setting addon default-storageclass=true in "functional-374330"
	I1202 19:18:07.293827   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.294253   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.304761   40272 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:18:07.307700   40272 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.307724   40272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 19:18:07.307789   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.332842   40272 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:07.332860   40272 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 19:18:07.332914   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.347890   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.373144   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.469482   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:07.472955   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.515784   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.293178   40272 node_ready.go:35] waiting up to 6m0s for node "functional-374330" to be "Ready" ...
	I1202 19:18:08.293301   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.293355   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.293568   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293595   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293615   40272 retry.go:31] will retry after 144.187129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293684   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293702   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293710   40272 retry.go:31] will retry after 132.365923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:08.427169   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.438559   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.510555   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513791   40272 retry.go:31] will retry after 461.570102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513742   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513825   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513833   40272 retry.go:31] will retry after 354.67857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
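Each failed apply above is kubectl's client-side validation trying to download the OpenAPI schema from an apiserver that is not yet listening on port 8441 (connection refused); the retry helper simply reschedules the same command with a short delay until the control plane comes back. A rough sketch of that retry-until-apply-succeeds loop, assuming a fixed delay rather than minikube's internal backoff:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs "kubectl apply --force -f manifest" until it succeeds
// or the attempts run out, mirroring how the log shows the addon manifests
// being retried while the apiserver is still coming back up.
func applyWithRetry(kubeconfig, manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml", 5, 500*time.Millisecond)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}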
	I1202 19:18:08.794133   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.794203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:08.868974   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.929070   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.932369   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.932402   40272 retry.go:31] will retry after 765.19043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.975575   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.036469   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.042296   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.042376   40272 retry.go:31] will retry after 433.124039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.293618   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.293713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:09.476440   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.538441   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.541412   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.541444   40272 retry.go:31] will retry after 747.346338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.698768   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:09.764666   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.764703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.764723   40272 retry.go:31] will retry after 541.76994ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.793827   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.793965   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.794261   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:10.289986   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:10.293340   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.293732   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:10.293780   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:10.307063   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:10.373573   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.373608   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.373627   40272 retry.go:31] will retry after 1.037281057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388739   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.388813   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388864   40272 retry.go:31] will retry after 1.072570226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.794280   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.794348   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.794651   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.293375   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.293466   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.293739   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.411088   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:11.462503   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:11.470558   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.470603   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.470624   40272 retry.go:31] will retry after 2.459470693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530455   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.530510   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530529   40272 retry.go:31] will retry after 2.35440359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.794013   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.794477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:12.294194   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.294271   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:12.294648   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
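The repeating GET https://192.168.49.2:8441/api/v1/nodes/functional-374330 requests are the node_ready poll looking for the Ready condition; while the apiserver is restarting they fail with connection refused and are simply retried about every 500ms. A minimal client-go sketch of that check (kubeconfig path hypothetical, polling simplified):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady performs the same check the poll above repeats: fetch the node
// and look for the Ready condition with status True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connection refused" while the apiserver restarts
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path hypothetical
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(context.Background(), cs, "functional-374330")
		if err == nil && ready {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}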
	I1202 19:18:12.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.793567   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.793595   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.793686   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.794006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.885433   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:13.930854   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:13.940303   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:13.943330   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:13.943359   40272 retry.go:31] will retry after 2.562469282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000907   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:14.000951   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000969   40272 retry.go:31] will retry after 3.172954134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.294316   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.294381   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:14.793366   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.793435   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.793778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:14.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:15.293495   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:15.793590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.793675   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.794004   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.293435   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.506093   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:16.576298   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:16.580372   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.580403   40272 retry.go:31] will retry after 6.193423377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.793925   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.794050   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:16.794410   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:17.174990   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:17.234065   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:17.234161   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.234184   40272 retry.go:31] will retry after 6.017051757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.293565   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.293640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:17.793940   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.794318   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.294120   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.294191   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.294497   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.794258   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.794341   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.794641   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:18.794693   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:19.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:19.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.793693   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.794032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.293712   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.793838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:21.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:21.293929   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:21.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.293417   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.774666   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:22.793983   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.794053   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.835259   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:22.835293   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:22.835313   40272 retry.go:31] will retry after 8.891499319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.251502   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:23.293920   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.293995   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.294305   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:23.294361   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:23.316803   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:23.325390   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.325420   40272 retry.go:31] will retry after 5.436174555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.794140   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.794209   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.794514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.294165   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.294234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.294532   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.794307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.794552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:25.294405   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.294476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.294786   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:25.294838   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:25.793518   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.793593   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.793954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.293881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.793441   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.793515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.793898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.293636   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.294038   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.793924   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.793994   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.794242   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:27.794290   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:28.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.294085   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.294398   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.762126   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:28.793717   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.794058   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.820417   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:28.820461   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:28.820480   40272 retry.go:31] will retry after 5.23527752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:29.294048   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.294387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:29.794183   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.794303   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.794634   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:29.794706   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:30.294267   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.294340   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.294624   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:30.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.793398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.793762   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.293841   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.727474   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:31.785329   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:31.788538   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.788571   40272 retry.go:31] will retry after 14.027342391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.793764   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.793834   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.794170   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:32.293926   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.293991   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.294245   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:32.294283   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:32.794305   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.794380   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.794731   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.293682   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.294006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:34.056328   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:34.114988   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:34.115034   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.115053   40272 retry.go:31] will retry after 20.825216377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.294372   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.294768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:34.294823   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:34.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.293815   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.293900   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.294151   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.793855   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.793935   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.794205   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.293483   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.793564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.793873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:36.793925   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:37.293668   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.293762   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.294075   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:37.793947   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.794293   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.294087   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.294335   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.794481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:38.794533   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:39.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.294563   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:39.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.794411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.794661   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.793560   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.793636   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:41.293642   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:41.294091   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:41.793737   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.793809   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.794119   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:42.294249   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.294351   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.295481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1202 19:18:42.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.794309   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.794549   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:43.294307   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.294779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:43.294833   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:43.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.793526   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.293539   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.293609   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.293775   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.294288   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.794074   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.794139   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:45.794427   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:45.816754   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:45.885215   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:45.888326   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:45.888364   40272 retry.go:31] will retry after 11.821193731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:46.293908   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.293987   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.294332   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:46.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.794188   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.794450   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.294325   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.294656   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.793465   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:48.293461   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.293549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:48.293980   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:48.793521   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.793585   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.793925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.293671   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.293755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.294085   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.793786   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.793857   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.794203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:50.293936   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.294005   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.294362   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:50.794095   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.794170   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.794494   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.294326   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.294720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:52.793945   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:53.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.293667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.293927   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:53.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.793852   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.794188   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.294005   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.294075   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.294426   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.794205   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.794284   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.794553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:54.794600   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:54.941002   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:55.004086   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:55.004129   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.004148   40272 retry.go:31] will retry after 20.918145005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.293488   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.293564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.293885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:55.793617   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.793707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.794018   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.293767   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.793648   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.793755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.794090   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:57.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.293891   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.294211   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:57.294263   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:57.710107   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:57.765891   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:57.765928   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.765947   40272 retry.go:31] will retry after 13.115816401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.793988   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.794063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.794301   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.294217   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.793430   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.793738   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.293442   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.293550   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.793871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:59.793930   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:00.295673   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.295757   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.296162   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:00.793971   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.794393   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.294295   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.294639   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.793817   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:02.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:02.293931   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:02.793522   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.793600   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.293690   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.293758   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.294007   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.793884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:04.293572   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:04.294031   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:04.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.793792   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:05.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:19:05.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:05.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:05.793473   40272 type.go:168] "Request Body" body=""
	I1202 19:19:05.793568   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:05.793916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:06.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:19:06.293673   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:06.293971   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:06.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:19:06.793528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:06.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:06.793897   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:07.293734   40272 type.go:168] "Request Body" body=""
	I1202 19:19:07.293806   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:07.294152   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:07.793956   40272 type.go:168] "Request Body" body=""
	I1202 19:19:07.794035   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:07.794289   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:08.294051   40272 type.go:168] "Request Body" body=""
	I1202 19:19:08.294130   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:08.294477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:08.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:19:08.794232   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:08.794588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:08.794644   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:09.294344   40272 type.go:168] "Request Body" body=""
	I1202 19:19:09.294413   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:09.294705   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:09.793394   40272 type.go:168] "Request Body" body=""
	I1202 19:19:09.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:09.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:19:10.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:10.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:19:10.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:10.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.882157   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:10.938212   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:10.938272   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:10.938296   40272 retry.go:31] will retry after 16.990081142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:11.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:19:11.293533   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:11.293866   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:11.293912   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:11.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:19:11.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:11.793893   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:12.293418   40272 type.go:168] "Request Body" body=""
	I1202 19:19:12.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:12.293805   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:12.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:12.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:12.793829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:13.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:19:13.293523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:13.293887   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:13.293939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:13.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:19:13.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:13.793901   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:14.293451   40272 type.go:168] "Request Body" body=""
	I1202 19:19:14.293545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:14.293847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:14.793538   40272 type.go:168] "Request Body" body=""
	I1202 19:19:14.793612   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:14.793947   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:15.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:15.293500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:15.293781   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:15.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:19:15.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:15.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:15.793881   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:15.923138   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:15.976380   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:15.979446   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:15.979475   40272 retry.go:31] will retry after 43.938975662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:16.293891   40272 type.go:168] "Request Body" body=""
	I1202 19:19:16.293966   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:16.294319   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:16.793918   40272 type.go:168] "Request Body" body=""
	I1202 19:19:16.794007   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:16.794273   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:17.293817   40272 type.go:168] "Request Body" body=""
	I1202 19:19:17.293889   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:17.294222   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:17.794224   40272 type.go:168] "Request Body" body=""
	I1202 19:19:17.794322   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:17.794659   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:17.794718   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:18.293644   40272 type.go:168] "Request Body" body=""
	I1202 19:19:18.293745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:18.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:18.793819   40272 type.go:168] "Request Body" body=""
	I1202 19:19:18.793896   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:18.794214   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:19.294047   40272 type.go:168] "Request Body" body=""
	I1202 19:19:19.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:19.294429   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:19.794155   40272 type.go:168] "Request Body" body=""
	I1202 19:19:19.794251   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:19.794516   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:20.294336   40272 type.go:168] "Request Body" body=""
	I1202 19:19:20.294409   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:20.294750   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:20.294804   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:20.793466   40272 type.go:168] "Request Body" body=""
	I1202 19:19:20.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:20.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:21.293392   40272 type.go:168] "Request Body" body=""
	I1202 19:19:21.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:21.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:21.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:19:21.793509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:21.793880   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:22.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:19:22.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:22.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:22.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:19:22.793814   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:22.794072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:22.794110   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:23.293462   40272 type.go:168] "Request Body" body=""
	I1202 19:19:23.293552   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:23.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:23.793520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:23.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:24.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:19:24.293676   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:24.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:24.793402   40272 type.go:168] "Request Body" body=""
	I1202 19:19:24.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:24.793777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:25.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:19:25.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:25.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:25.293933   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:25.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:19:25.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:25.793822   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:26.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:26.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:26.293870   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:26.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:19:26.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:26.794001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:27.293786   40272 type.go:168] "Request Body" body=""
	I1202 19:19:27.293876   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:27.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:27.294188   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:27.794144   40272 type.go:168] "Request Body" body=""
	I1202 19:19:27.794229   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:27.794572   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:27.928884   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:27.980862   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983877   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983967   40272 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:28.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:19:28.293635   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:28.293939   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:28.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:19:28.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:28.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:29.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:19:29.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:29.293888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:29.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:19:29.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:29.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:29.793943   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:30.293604   40272 type.go:168] "Request Body" body=""
	I1202 19:19:30.293690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:30.293949   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:30.793447   40272 type.go:168] "Request Body" body=""
	I1202 19:19:30.793541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:30.793879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:31.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:19:31.293681   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:31.294045   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:31.793596   40272 type.go:168] "Request Body" body=""
	I1202 19:19:31.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:31.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:31.793973   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:32.293633   40272 type.go:168] "Request Body" body=""
	I1202 19:19:32.293736   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:32.294100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:32.794048   40272 type.go:168] "Request Body" body=""
	I1202 19:19:32.794127   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:32.794454   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:33.294107   40272 type.go:168] "Request Body" body=""
	I1202 19:19:33.294193   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:33.294469   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:33.794161   40272 type.go:168] "Request Body" body=""
	I1202 19:19:33.794241   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:33.794576   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:33.794630   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:34.294318   40272 type.go:168] "Request Body" body=""
	I1202 19:19:34.294390   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:34.294756   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:34.793348   40272 type.go:168] "Request Body" body=""
	I1202 19:19:34.793419   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:34.793816   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:35.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:19:35.293582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:35.293934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:35.793450   40272 type.go:168] "Request Body" body=""
	I1202 19:19:35.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:35.793853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:36.293403   40272 type.go:168] "Request Body" body=""
	I1202 19:19:36.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:36.293796   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:36.293849   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:36.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:19:36.793604   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:36.793910   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:37.293819   40272 type.go:168] "Request Body" body=""
	I1202 19:19:37.293921   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:37.294237   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:37.793992   40272 type.go:168] "Request Body" body=""
	I1202 19:19:37.794062   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:37.794317   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:38.294129   40272 type.go:168] "Request Body" body=""
	I1202 19:19:38.294219   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:38.294552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:38.294607   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:38.794375   40272 type.go:168] "Request Body" body=""
	I1202 19:19:38.794449   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:38.794753   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:39.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:19:39.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:39.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:39.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:19:39.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:39.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:40.293464   40272 type.go:168] "Request Body" body=""
	I1202 19:19:40.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:40.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:40.793609   40272 type.go:168] "Request Body" body=""
	I1202 19:19:40.793726   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:40.793971   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:40.794046   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:41.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:19:41.293783   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:41.294101   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:41.793762   40272 type.go:168] "Request Body" body=""
	I1202 19:19:41.793835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:41.794208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:42.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:19:42.293532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:42.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:42.793895   40272 type.go:168] "Request Body" body=""
	I1202 19:19:42.793974   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:42.794274   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:42.794330   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:43.293462   40272 type.go:168] "Request Body" body=""
	I1202 19:19:43.293536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:43.293847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:43.793403   40272 type.go:168] "Request Body" body=""
	I1202 19:19:43.793470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:43.793794   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:44.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:44.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:44.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:44.793570   40272 type.go:168] "Request Body" body=""
	I1202 19:19:44.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:44.793981   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:45.293992   40272 type.go:168] "Request Body" body=""
	I1202 19:19:45.294153   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:45.294968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:45.295095   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:45.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:19:45.793517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:45.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:46.293433   40272 type.go:168] "Request Body" body=""
	I1202 19:19:46.293523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:46.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:46.793580   40272 type.go:168] "Request Body" body=""
	I1202 19:19:46.793672   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:46.794005   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:47.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:19:47.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:47.294181   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:47.794191   40272 type.go:168] "Request Body" body=""
	I1202 19:19:47.794264   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:47.794574   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:47.794634   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:48.294351   40272 type.go:168] "Request Body" body=""
	I1202 19:19:48.294414   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:48.294658   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:48.793387   40272 type.go:168] "Request Body" body=""
	I1202 19:19:48.793458   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:48.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:49.293548   40272 type.go:168] "Request Body" body=""
	I1202 19:19:49.293622   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:49.293967   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:49.793638   40272 type.go:168] "Request Body" body=""
	I1202 19:19:49.793723   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:49.793982   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:50.293669   40272 type.go:168] "Request Body" body=""
	I1202 19:19:50.293738   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:50.294063   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:50.294115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:50.793649   40272 type.go:168] "Request Body" body=""
	I1202 19:19:50.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:50.794030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:51.293404   40272 type.go:168] "Request Body" body=""
	I1202 19:19:51.293477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:51.293789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:51.793444   40272 type.go:168] "Request Body" body=""
	I1202 19:19:51.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:51.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:52.293605   40272 type.go:168] "Request Body" body=""
	I1202 19:19:52.293689   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:52.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:52.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:19:52.794056   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:52.794307   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:52.794355   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:53.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:19:53.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:53.294542   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:53.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:19:53.794460   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:53.794789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:54.293367   40272 type.go:168] "Request Body" body=""
	I1202 19:19:54.293448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:54.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:54.793399   40272 type.go:168] "Request Body" body=""
	I1202 19:19:54.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:54.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:55.293465   40272 type.go:168] "Request Body" body=""
	I1202 19:19:55.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:55.293912   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:55.293970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:55.793387   40272 type.go:168] "Request Body" body=""
	I1202 19:19:55.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:55.793748   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:56.293378   40272 type.go:168] "Request Body" body=""
	I1202 19:19:56.293444   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:56.293784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:56.793485   40272 type.go:168] "Request Body" body=""
	I1202 19:19:56.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:56.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:57.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:19:57.293823   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:57.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:57.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:57.794072   40272 type.go:168] "Request Body" body=""
	I1202 19:19:57.794142   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:57.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:58.294111   40272 type.go:168] "Request Body" body=""
	I1202 19:19:58.294203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:58.294515   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:58.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:19:58.794402   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:58.794662   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:59.293346   40272 type.go:168] "Request Body" body=""
	I1202 19:19:59.293443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:59.293801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:59.793412   40272 type.go:168] "Request Body" body=""
	I1202 19:19:59.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:59.793844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:59.793894   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:59.919155   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:59.978732   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978768   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978842   40272 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:59.981270   40272 out.go:179] * Enabled addons: 
	I1202 19:19:59.984008   40272 addons.go:530] duration metric: took 1m52.724080055s for enable addons: enabled=[]
	I1202 19:20:00.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.319155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=25
	I1202 19:20:00.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.793581   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.293643   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.294269   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.794085   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:01.794475   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:02.294283   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.294801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:02.793839   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.793918   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.794224   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.293780   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.293848   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.294097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.793818   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.793890   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.794190   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:04.294069   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.294138   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.294439   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:04.294488   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:04.794180   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.794261   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.794525   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.294270   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.294339   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.294637   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.793358   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.793447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.793770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.794145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:06.794195   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:07.293975   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.294054   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.294413   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:07.794308   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.794425   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.794772   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.293671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.294020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:09.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.293769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:09.293828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:09.794253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.794326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.794686   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:11.293475   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.293548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:11.293934   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:11.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.293544   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.293610   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.293915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.793833   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.793916   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.794241   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:13.293799   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.293872   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.294179   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:13.294238   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:13.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.794022   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.794276   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.294026   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.294105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.294453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.794135   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.794207   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:15.294253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.294326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:15.294638   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:15.793355   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.793426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.793551   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.793621   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.293774   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.293867   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.794117   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.794213   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.794539   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:17.794594   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:18.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.294374   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:18.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.794070   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:20.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.293900   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:20.293961   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:20.793436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.293924   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.793463   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.793956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.293478   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.793771   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:22.793827   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:23.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:23.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.293436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.293506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:24.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:25.293608   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.293707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.294025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:25.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.794022   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:26.794082   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:27.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.293785   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.294032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:27.793959   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.294157   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.294237   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.294582   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.794354   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.794429   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.794706   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:28.794758   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:29.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:29.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.293432   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.293782   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.793582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:31.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.293580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:31.293985   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:31.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.793797   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.793874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.794194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:33.293954   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.294018   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.294268   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:33.294307   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:33.794022   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.794093   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.794394   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.294075   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.294145   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.294479   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.794081   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.794161   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.794411   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:35.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.294307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.294631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:35.294684   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:35.794291   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.794361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.794710   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.294383   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.294672   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.793869   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.293817   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.294175   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.794113   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.794365   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:37.794404   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:38.294151   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.294567   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:38.794364   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.794441   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.794795   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.794051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:40.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.293749   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:40.294131   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:40.793755   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.794137   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.293804   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.293874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.294208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.794044   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.794437   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:42.294271   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.294354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.294638   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:42.294682   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:42.793464   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.293529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.293884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.793555   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.793904   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.293677   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.793724   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.793796   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:44.794158   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:45.293768   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.293839   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.294135   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:45.794039   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.294279   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.294679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.793388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.793455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:47.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.293786   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.294051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:47.294093   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:47.794031   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.794101   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.294153   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.294227   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.294472   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.794239   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.794680   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.293461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.293815   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.793404   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.793801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:49.793850   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:50.293494   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.293926   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:50.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.793579   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.293925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.794124   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:51.794181   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:52.293850   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.293930   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.294277   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:52.794083   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.794149   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.794406   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.294121   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.294195   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.294529   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.794350   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.794679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:53.794733   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:54.293471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.293541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:54.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:56.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.293455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:56.293831   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:56.793498   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.793574   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.793934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.293700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.293941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.793858   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.793928   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.794244   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:58.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.294083   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.294416   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:58.294470   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:58.794152   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.794222   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.794483   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.294312   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.294645   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.794292   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.794364   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.794674   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.293476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.293799   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.793832   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:00.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:01.293577   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:01.793727   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.793804   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.293823   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.293903   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.294253   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.794285   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.794354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.794650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:02.794701   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:03.293400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.293470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:03.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.293824   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.793783   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:05.293327   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.293398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:05.293767   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:05.794396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.794464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.794774   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.293683   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.793543   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:07.293810   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.293905   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.294228   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:07.294294   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:07.794228   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.794296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.794557   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.294314   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.294391   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.294721   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.793513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.293515   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.793507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.793849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:09.793915   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:10.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.293946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:10.793633   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.793713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.794014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.293862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.293767   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:12.293819   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:12.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.293560   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.293641   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:14.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.293853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:14.293920   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:14.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.293520   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.293586   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.793540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.793613   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:16.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.293615   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:16.293998   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:16.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.293689   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.293770   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.793898   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.793968   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.794294   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:18.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.294082   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.294374   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:18.294428   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:18.794173   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.794258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.794584   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.294375   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.294447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.294755   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.793492   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.793769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.793542   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.793614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.793957   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:20.794013   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:21.293675   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.293740   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:21.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.293837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.793766   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.793836   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.794155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:22.794204   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:23.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:23.793615   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.794078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.793860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:25.293571   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.293642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.293963   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:25.294010   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:25.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.793479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.793840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.793506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:27.293759   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.294093   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:27.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:27.794030   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.794105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.794432   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.294126   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.294546   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.794342   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.794587   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.293336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.793558   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:29.794070   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:30.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.293704   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:30.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.793500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:32.293467   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.293899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:32.293955   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:32.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.793527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.293566   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.293634   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.793481   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.793759   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:34.793805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:35.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.293507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:35.793599   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.793691   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.293780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.793879   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.793947   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.794270   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:36.794327   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:37.294002   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.294382   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:37.794293   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.794366   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.794623   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.293793   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.793479   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.793551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.793911   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:39.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:39.293900   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:39.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.793400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.793469   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.293410   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.293820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.793779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:41.793832   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:42.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:42.793809   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.793881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.794230   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.794300   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.794607   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:43.794654   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:44.294246   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.294318   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:44.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.793399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.793724   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.793836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:46.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.293848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:46.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:46.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.793766   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.293717   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.294035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.793981   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.794397   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:48.293997   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.294340   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:48.294384   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:48.794112   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.794192   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.794535   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.294292   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.794401   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.794648   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.293343   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.293431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.293749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.793332   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.793431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.793733   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:50.793781   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:51.294382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.294749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:51.794404   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.794484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.794827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.793741   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.794061   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:52.794098   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:53.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.293502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.293842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:53.793547   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.793619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.293686   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.293772   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:55.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.293522   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:55.293916   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:55.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.793966   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.793700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.794037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:57.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.293812   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.294147   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:57.294199   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:57.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.794029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.794360   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.294144   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.294215   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.294530   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.794311   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.794384   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.794669   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.293382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.293457   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.793915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:59.793970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:00.294203   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.294291   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:00.794373   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.794448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.794765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.793408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:02.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.293521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.293831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:02.293882   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:02.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.793524   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.294092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.793779   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.793863   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:04.294013   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.294096   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.294427   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:04.294479   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:04.794192   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.794518   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.294290   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.294361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.294692   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.293537   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.293889   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.793886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:06.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:07.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.293561   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:07.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.794431   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.294315   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.793325   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.793395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:09.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:09.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:09.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.793938   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.293512   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.293605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.293914   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.793473   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:11.293419   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:11.293911   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:11.793571   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.793667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.793998   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.293707   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.294044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.794038   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.794457   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:13.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.294294   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.294608   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:13.294662   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:13.793319   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.793385   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.793631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.293401   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.793974   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.293634   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.293715   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.294019   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.793580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.793905   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:15.793957   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:16.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.293753   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.294105   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:16.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.794139   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.294035   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.294104   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.294447   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.794420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.794500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.794802   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:17.794864   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:18.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:18.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.793908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.793487   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:20.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:20.294043   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:20.793747   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.793818   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.293829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.294078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.793486   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.293599   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.293684   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.293961   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.793847   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.793919   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.794173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:22.794221   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:23.294004   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.294391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:23.794182   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.794569   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.294310   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.294382   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.294678   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:25.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.293849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:25.293899   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:25.793411   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.793784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.293511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:27.293716   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.293790   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:27.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:27.794020   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.794114   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.294228   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.294302   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.294604   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.794372   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.794442   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.793369   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.793452   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.793775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:29.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:30.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:30.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.793820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.293618   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.293975   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.793639   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.793724   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.794026   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:31.794076   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:32.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.293867   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:32.793458   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.793534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.293479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.293808   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.793577   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:34.293638   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.293733   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.294053   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:34.294138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:34.793757   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.794123   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.293805   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.293875   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.294212   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.793796   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.793870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.794183   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:36.293916   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.293981   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.294225   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:36.294266   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:36.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.794051   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.794349   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.294147   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.294225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.294553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.794437   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.794726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.293504   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.793561   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.793979   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:38.794037   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:39.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.293812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:39.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.793508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.293825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.793461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.793725   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:41.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:41.293919   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:41.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.306206   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.306286   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.306588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.793842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:43.293564   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:43.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:43.793719   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.794033   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.293420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.293840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.794225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.794573   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.293335   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.293432   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.293823   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.793584   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.793699   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.794020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:45.794077   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:46.293765   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.294194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:46.793979   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.294352   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.294421   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.294757   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.793514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:48.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.293488   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:48.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:48.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.793896   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.793746   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.794140   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:50.293958   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.294029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.294356   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:50.794160   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.794234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.794577   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.294330   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.294654   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.793400   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.293818   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.793765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:52.793817   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:53.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:53.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.793594   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.793990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.293543   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.293619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.293933   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.793885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:54.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:55.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.293897   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:55.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.793627   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.293469   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.293845   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.793575   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.793643   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.793943   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:56.793996   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:57.293776   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.293861   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:57.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.794158   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.294275   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.294346   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.294665   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.793386   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.793763   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:59.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.293903   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:59.293962   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:59.793451   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.793525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.296332   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.296406   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.296694   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.293498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.793424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:01.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:02.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.293637   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.294144   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:02.793976   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.794047   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.294017   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.294088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.294379   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.794118   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.794444   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:03.794495   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:04.294106   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.294176   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.294496   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:04.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.794365   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.794711   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.793605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.793941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:06.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.293719   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.294067   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:06.294117   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:06.793866   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.793938   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.293887   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.293967   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.294287   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.794150   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.794403   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:08.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.294258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.294594   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:08.294647   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:08.793335   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.793404   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.793760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.793478   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.293956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.793532   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.793599   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:10.793903   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:11.293547   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.293625   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:11.793691   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.793764   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.794076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.793673   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.794066   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:12.794115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:13.293795   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.293870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.294207   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:13.793969   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.794283   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.294039   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.294109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.294436   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.794094   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.794171   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.794488   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:14.794541   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:15.294282   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.294357   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.294611   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:15.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.794443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.794770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.293836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.793477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:17.293700   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:17.294109   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:17.793903   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.793973   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.794593   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.294328   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.294646   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.793322   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.793392   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.793726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.793807   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:19.793870   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:20.293525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.293596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:20.793525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.793601   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.793946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.293705   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.294002   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.793707   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.793780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.794097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:21.794151   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:22.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.293892   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.294246   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:22.794023   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.794088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.794347   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.294098   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.294169   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.294495   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.794344   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.794436   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.794764   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:23.794818   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:24.293402   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.293471   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:24.793418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.793495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.293624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.293973   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.793669   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.793735   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.793985   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:26.293681   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.293789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.294111   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:26.294163   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:26.793710   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.793789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.794114   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:27.293843   40272 type.go:168] "Request Body" body=""
	I1202 19:23:27.293914   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:27.294239   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:27.794080   40272 type.go:168] "Request Body" body=""
	I1202 19:23:27.794155   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:27.794487   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:28.294258   40272 type.go:168] "Request Body" body=""
	I1202 19:23:28.294337   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:28.294650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:28.294705   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:28.793349   40272 type.go:168] "Request Body" body=""
	I1202 19:23:28.793419   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:28.793701   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:29.294241   40272 type.go:168] "Request Body" body=""
	I1202 19:23:29.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:29.294701   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:29.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:23:29.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:29.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:30.293509   40272 type.go:168] "Request Body" body=""
	I1202 19:23:30.293582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:30.293886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:30.793466   40272 type.go:168] "Request Body" body=""
	I1202 19:23:30.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:30.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:30.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:31.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:23:31.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:31.293801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:31.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:23:31.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:32.293492   40272 type.go:168] "Request Body" body=""
	I1202 19:23:32.293560   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:32.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:32.793447   40272 type.go:168] "Request Body" body=""
	I1202 19:23:32.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:32.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:33.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:23:33.293569   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:33.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:33.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:33.793580   40272 type.go:168] "Request Body" body=""
	I1202 19:23:33.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:33.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:34.293678   40272 type.go:168] "Request Body" body=""
	I1202 19:23:34.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:34.294103   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:34.793774   40272 type.go:168] "Request Body" body=""
	I1202 19:23:34.793844   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:34.794094   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:35.293808   40272 type.go:168] "Request Body" body=""
	I1202 19:23:35.293879   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:35.294203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:35.294261   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:35.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:23:35.794103   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:35.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:36.294141   40272 type.go:168] "Request Body" body=""
	I1202 19:23:36.294296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:36.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:36.794385   40272 type.go:168] "Request Body" body=""
	I1202 19:23:36.794460   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:36.794791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:37.293721   40272 type.go:168] "Request Body" body=""
	I1202 19:23:37.293800   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:37.294132   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:37.793966   40272 type.go:168] "Request Body" body=""
	I1202 19:23:37.794036   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:37.794297   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:37.794344   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:38.294080   40272 type.go:168] "Request Body" body=""
	I1202 19:23:38.294155   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:38.294482   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:38.794270   40272 type.go:168] "Request Body" body=""
	I1202 19:23:38.794347   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:38.794663   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:39.293411   40272 type.go:168] "Request Body" body=""
	I1202 19:23:39.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:39.293789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:39.793476   40272 type.go:168] "Request Body" body=""
	I1202 19:23:39.793548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:39.793865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:40.293455   40272 type.go:168] "Request Body" body=""
	I1202 19:23:40.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:40.293907   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:40.293963   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:40.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:23:40.793587   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:40.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:41.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:23:41.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:41.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:41.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:23:41.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:41.793844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:42.293444   40272 type.go:168] "Request Body" body=""
	I1202 19:23:42.293546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:42.293898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:42.793891   40272 type.go:168] "Request Body" body=""
	I1202 19:23:42.793960   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:42.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:42.794326   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:43.294061   40272 type.go:168] "Request Body" body=""
	I1202 19:23:43.294133   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:43.294467   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:43.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:23:43.794316   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:43.794572   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:44.294331   40272 type.go:168] "Request Body" body=""
	I1202 19:23:44.294411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:44.294778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:44.793422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:44.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:45.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:45.293631   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:45.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:45.293977   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:45.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:23:45.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:45.793835   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:46.293534   40272 type.go:168] "Request Body" body=""
	I1202 19:23:46.293612   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:46.294003   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:46.793541   40272 type.go:168] "Request Body" body=""
	I1202 19:23:46.793611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:46.793878   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:47.293767   40272 type.go:168] "Request Body" body=""
	I1202 19:23:47.293837   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:47.294173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:47.294229   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:47.794221   40272 type.go:168] "Request Body" body=""
	I1202 19:23:47.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:47.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:48.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:48.293486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:48.293760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:48.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:23:48.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:48.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:49.293446   40272 type.go:168] "Request Body" body=""
	I1202 19:23:49.293546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:49.293944   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:49.793512   40272 type.go:168] "Request Body" body=""
	I1202 19:23:49.793587   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:49.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:49.793918   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:50.293594   40272 type.go:168] "Request Body" body=""
	I1202 19:23:50.293685   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:50.294016   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:50.793739   40272 type.go:168] "Request Body" body=""
	I1202 19:23:50.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:50.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:51.293812   40272 type.go:168] "Request Body" body=""
	I1202 19:23:51.293881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:51.294164   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:51.793945   40272 type.go:168] "Request Body" body=""
	I1202 19:23:51.794024   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:51.794370   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:51.794425   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:52.294111   40272 type.go:168] "Request Body" body=""
	I1202 19:23:52.294180   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:52.294514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:52.794387   40272 type.go:168] "Request Body" body=""
	I1202 19:23:52.794468   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:52.794736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:53.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:23:53.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:53.293866   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:53.793588   40272 type.go:168] "Request Body" body=""
	I1202 19:23:53.793680   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:53.794035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:54.293418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:54.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:54.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:54.293865   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:54.793520   40272 type.go:168] "Request Body" body=""
	I1202 19:23:54.793596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:54.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:55.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:23:55.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:55.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:55.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:23:55.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:55.793859   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:56.293555   40272 type.go:168] "Request Body" body=""
	I1202 19:23:56.293632   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:56.293967   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:56.294027   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:56.793450   40272 type.go:168] "Request Body" body=""
	I1202 19:23:56.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:56.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:57.293744   40272 type.go:168] "Request Body" body=""
	I1202 19:23:57.293822   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:57.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:57.794034   40272 type.go:168] "Request Body" body=""
	I1202 19:23:57.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:57.794429   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:58.294164   40272 type.go:168] "Request Body" body=""
	I1202 19:23:58.294240   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:58.294551   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:58.294605   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:58.794324   40272 type.go:168] "Request Body" body=""
	I1202 19:23:58.794395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:58.794640   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:59.293351   40272 type.go:168] "Request Body" body=""
	I1202 19:23:59.293426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:59.293726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:59.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:23:59.793529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:59.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:00.301671   40272 type.go:168] "Request Body" body=""
	I1202 19:24:00.301760   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:00.302092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:00.302138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:00.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:24:00.793509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:00.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:01.293581   40272 type.go:168] "Request Body" body=""
	I1202 19:24:01.293683   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:01.294068   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:01.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:24:01.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:01.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:02.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:24:02.293633   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:02.293968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:02.793760   40272 type.go:168] "Request Body" body=""
	I1202 19:24:02.793866   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:02.794174   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:02.794228   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:03.293986   40272 type.go:168] "Request Body" body=""
	I1202 19:24:03.294063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:03.296865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1202 19:24:03.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:24:03.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:03.793994   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:04.293692   40272 type.go:168] "Request Body" body=""
	I1202 19:24:04.293763   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:04.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:04.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:24:04.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:04.793833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:05.293536   40272 type.go:168] "Request Body" body=""
	I1202 19:24:05.293614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:05.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:05.294030   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:05.793675   40272 type.go:168] "Request Body" body=""
	I1202 19:24:05.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:05.794044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:06.293762   40272 type.go:168] "Request Body" body=""
	I1202 19:24:06.293838   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:06.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:06.793966   40272 type.go:168] "Request Body" body=""
	I1202 19:24:06.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:06.794391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:07.294030   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.294116   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.298234   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1202 19:24:07.301805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:07.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.794025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:08.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:24:08.293509   40272 node_ready.go:38] duration metric: took 6m0.000285031s for node "functional-374330" to be "Ready" ...
	I1202 19:24:08.296878   40272 out.go:203] 
	W1202 19:24:08.299748   40272 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:24:08.299768   40272 out.go:285] * 
	W1202 19:24:08.301915   40272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:24:08.304698   40272 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697270209Z" level=info msg="Using the internal default seccomp profile"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697279554Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697285322Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697291254Z" level=info msg="RDT not available in the host system"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.697303439Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.698232732Z" level=info msg="Conmon does support the --sync option"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.69825349Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.698268071Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.699067635Z" level=info msg="Conmon does support the --sync option"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.699096049Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.699220091Z" level=info msg="Updated default CNI network name to "
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.699755735Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\
"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_liste
n = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.700119043Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.700176732Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746773976Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746817643Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746864067Z" level=info msg="Create NRI interface"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746984392Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.746994263Z" level=info msg="runtime interface created"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747006185Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747012371Z" level=info msg="runtime interface starting up..."
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747021052Z" level=info msg="starting plugins..."
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747034639Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 19:18:05 functional-374330 crio[6021]: time="2025-12-02T19:18:05.747104447Z" level=info msg="No systemd watchdog enabled"
	Dec 02 19:18:05 functional-374330 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:24:12.784276    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:12.785101    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:12.786737    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:12.787280    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:12.788909    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:24:12 up  1:06,  0 user,  load average: 0.12, 0.21, 0.33
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:24:10 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:11 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 02 19:24:11 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:11 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:11 functional-374330 kubelet[9282]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:11 functional-374330 kubelet[9282]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:11 functional-374330 kubelet[9282]: E1202 19:24:11.131263    9282 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:11 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:11 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:11 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 02 19:24:11 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:11 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:11 functional-374330 kubelet[9315]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:11 functional-374330 kubelet[9315]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:11 functional-374330 kubelet[9315]: E1202 19:24:11.844468    9315 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:11 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:11 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:12 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 815.
	Dec 02 19:24:12 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:12 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:12 functional-374330 kubelet[9357]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:12 functional-374330 kubelet[9357]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:12 functional-374330 kubelet[9357]: E1202 19:24:12.606964    9357 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:12 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:12 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (338.651912ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 kubectl -- --context functional-374330 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 kubectl -- --context functional-374330 get pods: exit status 1 (103.625191ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-374330 kubectl -- --context functional-374330 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
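The inspect output above shows the container itself is Running and that the API server port 8441/tcp is published to 127.0.0.1:32786 on the host. The same field can be pulled directly with a Go template, in the same style the helpers use for port 22/tcp later in this log (a sketch, assuming the container name functional-374330):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-374330
    # prints the forwarded host port (32786 here) while the container is up; together with .State.Status this mirrors the Host/APIServer split seen in "minikube status"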
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (319.093363ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-535807 image ls --format yaml --alsologtostderr                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format json --alsologtostderr                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr                                            │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format table --alsologtostderr                                                                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ delete         │ -p functional-535807                                                                                                                              │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ start          │ -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ start          │ -p functional-374330 --alsologtostderr -v=8                                                                                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:18 UTC │                     │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:latest                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add minikube-local-cache-test:functional-374330                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache delete minikube-local-cache-test:functional-374330                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl images                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	│ cache          │ functional-374330 cache reload                                                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ kubectl        │ functional-374330 kubectl -- --context functional-374330 get pods                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:18:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:18:02.458749   40272 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:18:02.458868   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.458880   40272 out.go:374] Setting ErrFile to fd 2...
	I1202 19:18:02.458886   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.459160   40272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:18:02.459549   40272 out.go:368] Setting JSON to false
	I1202 19:18:02.460340   40272 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3621,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:18:02.460405   40272 start.go:143] virtualization:  
	I1202 19:18:02.464020   40272 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:18:02.467892   40272 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:18:02.467969   40272 notify.go:221] Checking for updates...
	I1202 19:18:02.474021   40272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:18:02.477064   40272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:02.480130   40272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:18:02.483164   40272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:18:02.486142   40272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:18:02.489587   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:02.489732   40272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:18:02.527318   40272 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:18:02.527492   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.584790   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.575369586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.584902   40272 docker.go:319] overlay module found
	I1202 19:18:02.588038   40272 out.go:179] * Using the docker driver based on existing profile
	I1202 19:18:02.590861   40272 start.go:309] selected driver: docker
	I1202 19:18:02.590885   40272 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.591008   40272 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:18:02.591102   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.644457   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.635623623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.644867   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:02.644933   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:02.644976   40272 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.648222   40272 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:18:02.651050   40272 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:18:02.654072   40272 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:18:02.657154   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:02.657223   40272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:18:02.676274   40272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:18:02.676298   40272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:18:02.730421   40272 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:18:02.934277   40272 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:18:02.934463   40272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:18:02.934535   40272 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934623   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:18:02.934634   40272 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.203µs
	I1202 19:18:02.934648   40272 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:18:02.934660   40272 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934690   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:18:02.934695   40272 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.324µs
	I1202 19:18:02.934701   40272 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934707   40272 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:18:02.934711   40272 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934738   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:18:02.934736   40272 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934743   40272 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.525µs
	I1202 19:18:02.934750   40272 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934759   40272 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934774   40272 start.go:364] duration metric: took 25.468µs to acquireMachinesLock for "functional-374330"
	I1202 19:18:02.934787   40272 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:18:02.934789   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:18:02.934792   40272 fix.go:54] fixHost starting: 
	I1202 19:18:02.934794   40272 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 35.864µs
	I1202 19:18:02.934800   40272 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934809   40272 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934834   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:18:02.934845   40272 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 19:18:02.934851   40272 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934859   40272 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934885   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:18:02.934890   40272 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.983µs
	I1202 19:18:02.934895   40272 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:18:02.934913   40272 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934941   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:18:02.934946   40272 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.707µs
	I1202 19:18:02.934951   40272 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:18:02.934960   40272 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934985   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:18:02.934990   40272 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.646µs
	I1202 19:18:02.934995   40272 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:18:02.935015   40272 cache.go:87] Successfully saved all images to host disk.
	I1202 19:18:02.935074   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:02.953213   40272 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:18:02.953249   40272 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:18:02.956557   40272 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:18:02.956597   40272 machine.go:94] provisionDockerMachine start ...
	I1202 19:18:02.956677   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:02.973977   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:02.974301   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:02.974316   40272 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:18:03.125393   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.125419   40272 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:18:03.125485   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.143103   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.143432   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.143449   40272 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:18:03.303153   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.303231   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.322823   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.323149   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.323170   40272 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:18:03.473999   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:18:03.474027   40272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:18:03.474048   40272 ubuntu.go:190] setting up certificates
	I1202 19:18:03.474072   40272 provision.go:84] configureAuth start
	I1202 19:18:03.474137   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:03.492443   40272 provision.go:143] copyHostCerts
	I1202 19:18:03.492497   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492535   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:18:03.492553   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492631   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:18:03.492733   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492755   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:18:03.492763   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492791   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:18:03.492852   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492873   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:18:03.492880   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492905   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:18:03.492966   40272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:18:03.672249   40272 provision.go:177] copyRemoteCerts
	I1202 19:18:03.672315   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:18:03.672360   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.690216   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:03.793601   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:18:03.793730   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:18:03.811690   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:18:03.811788   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:18:03.829853   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:18:03.829937   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:18:03.847063   40272 provision.go:87] duration metric: took 372.963339ms to configureAuth
	I1202 19:18:03.847135   40272 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:18:03.847323   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:03.847434   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.865504   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.865829   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.865845   40272 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:18:04.201120   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:18:04.201145   40272 machine.go:97] duration metric: took 1.244539118s to provisionDockerMachine
	I1202 19:18:04.201156   40272 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:18:04.201184   40272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:18:04.201288   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:18:04.201334   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.219464   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.321684   40272 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:18:04.325089   40272 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 19:18:04.325149   40272 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 19:18:04.325168   40272 command_runner.go:130] > VERSION_ID="12"
	I1202 19:18:04.325186   40272 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 19:18:04.325207   40272 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 19:18:04.325237   40272 command_runner.go:130] > ID=debian
	I1202 19:18:04.325255   40272 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 19:18:04.325286   40272 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 19:18:04.325319   40272 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 19:18:04.325987   40272 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:18:04.326040   40272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:18:04.326062   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:18:04.326146   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:18:04.326256   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:18:04.326282   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:18:04.326394   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:18:04.326431   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> /etc/test/nested/copy/4470/hosts
	I1202 19:18:04.326515   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:18:04.334852   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:04.354617   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:18:04.371951   40272 start.go:296] duration metric: took 170.764596ms for postStartSetup
	I1202 19:18:04.372028   40272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:18:04.372100   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.388603   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.485826   40272 command_runner.go:130] > 12%
	I1202 19:18:04.486229   40272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:18:04.490474   40272 command_runner.go:130] > 172G
	I1202 19:18:04.490820   40272 fix.go:56] duration metric: took 1.556023913s for fixHost
	I1202 19:18:04.490841   40272 start.go:83] releasing machines lock for "functional-374330", held for 1.55605912s
	I1202 19:18:04.490913   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:04.507171   40272 ssh_runner.go:195] Run: cat /version.json
	I1202 19:18:04.507212   40272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:18:04.507223   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.507284   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.524406   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.524835   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.718816   40272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 19:18:04.718877   40272 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 19:18:04.719015   40272 ssh_runner.go:195] Run: systemctl --version
	I1202 19:18:04.724818   40272 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 19:18:04.724852   40272 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 19:18:04.725306   40272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:18:04.761633   40272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 19:18:04.765941   40272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 19:18:04.765984   40272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:18:04.766036   40272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:18:04.775671   40272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:18:04.775697   40272 start.go:496] detecting cgroup driver to use...
	I1202 19:18:04.775733   40272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:18:04.775798   40272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:18:04.790690   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:18:04.805178   40272 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:18:04.805246   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:18:04.821173   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:18:04.835737   40272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:18:04.950984   40272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:18:05.087151   40272 docker.go:234] disabling docker service ...
	I1202 19:18:05.087235   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:18:05.103857   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:18:05.118486   40272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:18:05.244193   40272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:18:05.357860   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:18:05.370494   40272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:18:05.383221   40272 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 19:18:05.384408   40272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:18:05.384504   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.393298   40272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:18:05.393384   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.402265   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.411107   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.420227   40272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:18:05.428585   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.437313   40272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.445677   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
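	The sed/grep commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; judging from the values reported back in the later `crio config` dump (cgroup manager, conmon cgroup, default sysctls) and the pause-image setting logged above, a minimal sketch of the resulting fragment would be roughly:

		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
			"net.ipv4.ip_unprivileged_port_start=0",
		]

	(Only the keys touched by the commands above are shown; the remainder of 02-crio.conf is assumed to stay as shipped.)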
	I1202 19:18:05.454485   40272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:18:05.461070   40272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 19:18:05.462061   40272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:18:05.469806   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:05.580364   40272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:18:05.753810   40272 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:18:05.753880   40272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:18:05.759122   40272 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 19:18:05.759148   40272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 19:18:05.759155   40272 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 19:18:05.759163   40272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:05.759168   40272 command_runner.go:130] > Access: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759176   40272 command_runner.go:130] > Modify: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759183   40272 command_runner.go:130] > Change: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759187   40272 command_runner.go:130] >  Birth: -
	I1202 19:18:05.759949   40272 start.go:564] Will wait 60s for crictl version
	I1202 19:18:05.760004   40272 ssh_runner.go:195] Run: which crictl
	I1202 19:18:05.764137   40272 command_runner.go:130] > /usr/local/bin/crictl
	I1202 19:18:05.765127   40272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:18:05.790594   40272 command_runner.go:130] > Version:  0.1.0
	I1202 19:18:05.790618   40272 command_runner.go:130] > RuntimeName:  cri-o
	I1202 19:18:05.790833   40272 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 19:18:05.791045   40272 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 19:18:05.793417   40272 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:18:05.793500   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.827591   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.827617   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.827624   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.827633   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.827640   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.827654   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.827661   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.827671   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.827679   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.827682   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.827686   40272 command_runner.go:130] >      static
	I1202 19:18:05.827702   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.827705   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.827713   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.827719   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.827727   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.827733   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.827740   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.827750   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.827762   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.829485   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.856217   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.856241   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.856248   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.856254   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.856260   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.856264   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.856268   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.856272   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.856277   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.856281   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.856285   40272 command_runner.go:130] >      static
	I1202 19:18:05.856288   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.856292   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.856297   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.856300   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.856307   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.856311   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.856315   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.856333   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.856342   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.862922   40272 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:18:05.865574   40272 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:18:05.881617   40272 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:18:05.885365   40272 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 19:18:05.885465   40272 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:18:05.885585   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:05.885631   40272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:18:05.915386   40272 command_runner.go:130] > {
	I1202 19:18:05.915407   40272 command_runner.go:130] >   "images":  [
	I1202 19:18:05.915412   40272 command_runner.go:130] >     {
	I1202 19:18:05.915425   40272 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 19:18:05.915430   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915436   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 19:18:05.915440   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915443   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915458   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 19:18:05.915465   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915469   40272 command_runner.go:130] >       "size":  "29035622",
	I1202 19:18:05.915474   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915478   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915484   40272 command_runner.go:130] >     },
	I1202 19:18:05.915487   40272 command_runner.go:130] >     {
	I1202 19:18:05.915494   40272 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 19:18:05.915501   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915507   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 19:18:05.915511   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915523   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915531   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 19:18:05.915535   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915542   40272 command_runner.go:130] >       "size":  "74488375",
	I1202 19:18:05.915547   40272 command_runner.go:130] >       "username":  "nonroot",
	I1202 19:18:05.915550   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915553   40272 command_runner.go:130] >     },
	I1202 19:18:05.915562   40272 command_runner.go:130] >     {
	I1202 19:18:05.915572   40272 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 19:18:05.915585   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915590   40272 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 19:18:05.915593   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915597   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915618   40272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 19:18:05.915626   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915630   40272 command_runner.go:130] >       "size":  "60854229",
	I1202 19:18:05.915634   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915637   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915641   40272 command_runner.go:130] >       },
	I1202 19:18:05.915645   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915652   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915661   40272 command_runner.go:130] >     },
	I1202 19:18:05.915666   40272 command_runner.go:130] >     {
	I1202 19:18:05.915681   40272 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 19:18:05.915686   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915691   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 19:18:05.915697   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915702   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915710   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 19:18:05.915713   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915718   40272 command_runner.go:130] >       "size":  "84947242",
	I1202 19:18:05.915721   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915725   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915728   40272 command_runner.go:130] >       },
	I1202 19:18:05.915736   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915743   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915746   40272 command_runner.go:130] >     },
	I1202 19:18:05.915750   40272 command_runner.go:130] >     {
	I1202 19:18:05.915756   40272 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 19:18:05.915762   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915771   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 19:18:05.915778   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915782   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915790   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 19:18:05.915797   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915805   40272 command_runner.go:130] >       "size":  "72167568",
	I1202 19:18:05.915809   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915813   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915816   40272 command_runner.go:130] >       },
	I1202 19:18:05.915820   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915824   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915828   40272 command_runner.go:130] >     },
	I1202 19:18:05.915831   40272 command_runner.go:130] >     {
	I1202 19:18:05.915841   40272 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 19:18:05.915852   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915858   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 19:18:05.915861   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915866   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915880   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 19:18:05.915883   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915887   40272 command_runner.go:130] >       "size":  "74105124",
	I1202 19:18:05.915891   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915896   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915902   40272 command_runner.go:130] >     },
	I1202 19:18:05.915906   40272 command_runner.go:130] >     {
	I1202 19:18:05.915912   40272 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 19:18:05.915917   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915925   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 19:18:05.915930   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915934   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915943   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 19:18:05.915949   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915953   40272 command_runner.go:130] >       "size":  "49819792",
	I1202 19:18:05.915961   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915968   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915972   40272 command_runner.go:130] >       },
	I1202 19:18:05.915976   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915982   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915988   40272 command_runner.go:130] >     },
	I1202 19:18:05.915992   40272 command_runner.go:130] >     {
	I1202 19:18:05.915999   40272 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 19:18:05.916003   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.916010   40272 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.916014   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916018   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.916027   40272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 19:18:05.916043   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916046   40272 command_runner.go:130] >       "size":  "517328",
	I1202 19:18:05.916049   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.916054   40272 command_runner.go:130] >         "value":  "65535"
	I1202 19:18:05.916064   40272 command_runner.go:130] >       },
	I1202 19:18:05.916068   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.916072   40272 command_runner.go:130] >       "pinned":  true
	I1202 19:18:05.916075   40272 command_runner.go:130] >     }
	I1202 19:18:05.916078   40272 command_runner.go:130] >   ]
	I1202 19:18:05.916081   40272 command_runner.go:130] > }
	I1202 19:18:05.916221   40272 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:18:05.916234   40272 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:18:05.916241   40272 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:18:05.916331   40272 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:18:05.916421   40272 ssh_runner.go:195] Run: crio config
	I1202 19:18:05.964092   40272 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 19:18:05.964119   40272 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 19:18:05.964127   40272 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 19:18:05.964130   40272 command_runner.go:130] > #
	I1202 19:18:05.964138   40272 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 19:18:05.964149   40272 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 19:18:05.964156   40272 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 19:18:05.964166   40272 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 19:18:05.964176   40272 command_runner.go:130] > # reload'.
	I1202 19:18:05.964182   40272 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 19:18:05.964189   40272 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 19:18:05.964197   40272 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 19:18:05.964204   40272 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 19:18:05.964210   40272 command_runner.go:130] > [crio]
	I1202 19:18:05.964216   40272 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 19:18:05.964223   40272 command_runner.go:130] > # containers images, in this directory.
	I1202 19:18:05.964661   40272 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 19:18:05.964681   40272 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 19:18:05.965195   40272 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 19:18:05.965213   40272 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 19:18:05.965585   40272 command_runner.go:130] > # imagestore = ""
	I1202 19:18:05.965601   40272 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 19:18:05.965614   40272 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 19:18:05.966162   40272 command_runner.go:130] > # storage_driver = "overlay"
	I1202 19:18:05.966179   40272 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 19:18:05.966186   40272 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 19:18:05.966362   40272 command_runner.go:130] > # storage_option = [
	I1202 19:18:05.966573   40272 command_runner.go:130] > # ]
	I1202 19:18:05.966591   40272 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 19:18:05.966598   40272 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 19:18:05.966880   40272 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 19:18:05.966894   40272 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 19:18:05.966902   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 19:18:05.966914   40272 command_runner.go:130] > # always happen on a node reboot
	I1202 19:18:05.967066   40272 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 19:18:05.967095   40272 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 19:18:05.967102   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 19:18:05.967107   40272 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 19:18:05.967213   40272 command_runner.go:130] > # version_file_persist = ""
	I1202 19:18:05.967225   40272 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 19:18:05.967234   40272 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 19:18:05.967423   40272 command_runner.go:130] > # internal_wipe = true
	I1202 19:18:05.967436   40272 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 19:18:05.967449   40272 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 19:18:05.967580   40272 command_runner.go:130] > # internal_repair = true
	I1202 19:18:05.967590   40272 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 19:18:05.967596   40272 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 19:18:05.967602   40272 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 19:18:05.967753   40272 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 19:18:05.967764   40272 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 19:18:05.967767   40272 command_runner.go:130] > [crio.api]
	I1202 19:18:05.967773   40272 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 19:18:05.967953   40272 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 19:18:05.967969   40272 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 19:18:05.968134   40272 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 19:18:05.968145   40272 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 19:18:05.968169   40272 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 19:18:05.968297   40272 command_runner.go:130] > # stream_port = "0"
	I1202 19:18:05.968307   40272 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 19:18:05.968473   40272 command_runner.go:130] > # stream_enable_tls = false
	I1202 19:18:05.968483   40272 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 19:18:05.968653   40272 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 19:18:05.968663   40272 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 19:18:05.968669   40272 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968775   40272 command_runner.go:130] > # stream_tls_cert = ""
	I1202 19:18:05.968785   40272 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 19:18:05.968792   40272 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968905   40272 command_runner.go:130] > # stream_tls_key = ""
	I1202 19:18:05.968915   40272 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 19:18:05.968922   40272 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 19:18:05.968926   40272 command_runner.go:130] > # automatically pick up the changes.
	I1202 19:18:05.969055   40272 command_runner.go:130] > # stream_tls_ca = ""
	I1202 19:18:05.969084   40272 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969257   40272 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 19:18:05.969270   40272 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969439   40272 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 19:18:05.969511   40272 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 19:18:05.969528   40272 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 19:18:05.969532   40272 command_runner.go:130] > [crio.runtime]
	I1202 19:18:05.969539   40272 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 19:18:05.969544   40272 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 19:18:05.969548   40272 command_runner.go:130] > # "nofile=1024:2048"
	I1202 19:18:05.969554   40272 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 19:18:05.969676   40272 command_runner.go:130] > # default_ulimits = [
	I1202 19:18:05.969684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.969691   40272 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 19:18:05.969900   40272 command_runner.go:130] > # no_pivot = false
	I1202 19:18:05.969912   40272 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 19:18:05.969920   40272 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 19:18:05.970109   40272 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 19:18:05.970119   40272 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 19:18:05.970124   40272 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 19:18:05.970131   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970227   40272 command_runner.go:130] > # conmon = ""
	I1202 19:18:05.970236   40272 command_runner.go:130] > # Cgroup setting for conmon
	I1202 19:18:05.970244   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 19:18:05.970379   40272 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 19:18:05.970389   40272 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 19:18:05.970395   40272 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 19:18:05.970403   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970521   40272 command_runner.go:130] > # conmon_env = [
	I1202 19:18:05.970671   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970681   40272 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 19:18:05.970687   40272 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 19:18:05.970693   40272 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 19:18:05.970697   40272 command_runner.go:130] > # default_env = [
	I1202 19:18:05.970827   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970837   40272 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 19:18:05.970846   40272 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1202 19:18:05.970995   40272 command_runner.go:130] > # selinux = false
	I1202 19:18:05.971005   40272 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 19:18:05.971014   40272 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 19:18:05.971019   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971123   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.971133   40272 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 19:18:05.971140   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971283   40272 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 19:18:05.971297   40272 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 19:18:05.971349   40272 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 19:18:05.971394   40272 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 19:18:05.971420   40272 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 19:18:05.971426   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971532   40272 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 19:18:05.971542   40272 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 19:18:05.971554   40272 command_runner.go:130] > # the cgroup blockio controller.
	I1202 19:18:05.971691   40272 command_runner.go:130] > # blockio_config_file = ""
	I1202 19:18:05.971702   40272 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 19:18:05.971706   40272 command_runner.go:130] > # blockio parameters.
	I1202 19:18:05.971888   40272 command_runner.go:130] > # blockio_reload = false
	I1202 19:18:05.971899   40272 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 19:18:05.971911   40272 command_runner.go:130] > # irqbalance daemon.
	I1202 19:18:05.972089   40272 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 19:18:05.972099   40272 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 19:18:05.972107   40272 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 19:18:05.972118   40272 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 19:18:05.972238   40272 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 19:18:05.972249   40272 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 19:18:05.972255   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.972373   40272 command_runner.go:130] > # rdt_config_file = ""
	I1202 19:18:05.972382   40272 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 19:18:05.972510   40272 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 19:18:05.972521   40272 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 19:18:05.972668   40272 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 19:18:05.972679   40272 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 19:18:05.972686   40272 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 19:18:05.972689   40272 command_runner.go:130] > # will be added.
	I1202 19:18:05.972804   40272 command_runner.go:130] > # default_capabilities = [
	I1202 19:18:05.972909   40272 command_runner.go:130] > # 	"CHOWN",
	I1202 19:18:05.973035   40272 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 19:18:05.973186   40272 command_runner.go:130] > # 	"FSETID",
	I1202 19:18:05.973194   40272 command_runner.go:130] > # 	"FOWNER",
	I1202 19:18:05.973322   40272 command_runner.go:130] > # 	"SETGID",
	I1202 19:18:05.973468   40272 command_runner.go:130] > # 	"SETUID",
	I1202 19:18:05.973500   40272 command_runner.go:130] > # 	"SETPCAP",
	I1202 19:18:05.973632   40272 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 19:18:05.973847   40272 command_runner.go:130] > # 	"KILL",
	I1202 19:18:05.973855   40272 command_runner.go:130] > # ]
	I1202 19:18:05.973864   40272 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 19:18:05.973870   40272 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 19:18:05.974039   40272 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 19:18:05.974052   40272 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 19:18:05.974059   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974062   40272 command_runner.go:130] > default_sysctls = [
	I1202 19:18:05.974148   40272 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 19:18:05.974179   40272 command_runner.go:130] > ]
	I1202 19:18:05.974185   40272 command_runner.go:130] > # List of devices on the host that a
	I1202 19:18:05.974297   40272 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 19:18:05.974459   40272 command_runner.go:130] > # allowed_devices = [
	I1202 19:18:05.974492   40272 command_runner.go:130] > # 	"/dev/fuse",
	I1202 19:18:05.974497   40272 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 19:18:05.974500   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974505   40272 command_runner.go:130] > # List of additional devices. specified as
	I1202 19:18:05.974517   40272 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 19:18:05.974706   40272 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 19:18:05.974717   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974722   40272 command_runner.go:130] > # additional_devices = [
	I1202 19:18:05.974730   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974735   40272 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 19:18:05.974870   40272 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 19:18:05.975061   40272 command_runner.go:130] > # 	"/etc/cdi",
	I1202 19:18:05.975069   40272 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 19:18:05.975204   40272 command_runner.go:130] > # ]
	I1202 19:18:05.975337   40272 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 19:18:05.975610   40272 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 19:18:05.975708   40272 command_runner.go:130] > # Defaults to false.
	I1202 19:18:05.975730   40272 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 19:18:05.975766   40272 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 19:18:05.975927   40272 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 19:18:05.976135   40272 command_runner.go:130] > # hooks_dir = [
	I1202 19:18:05.976173   40272 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 19:18:05.976199   40272 command_runner.go:130] > # ]
	I1202 19:18:05.976222   40272 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 19:18:05.976257   40272 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 19:18:05.976344   40272 command_runner.go:130] > # its default mounts from the following two files:
	I1202 19:18:05.976363   40272 command_runner.go:130] > #
	I1202 19:18:05.976438   40272 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 19:18:05.976465   40272 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 19:18:05.976485   40272 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 19:18:05.976561   40272 command_runner.go:130] > #
	I1202 19:18:05.976637   40272 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 19:18:05.976658   40272 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 19:18:05.976681   40272 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 19:18:05.976711   40272 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 19:18:05.976797   40272 command_runner.go:130] > #
	I1202 19:18:05.976852   40272 command_runner.go:130] > # default_mounts_file = ""
	I1202 19:18:05.976886   40272 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 19:18:05.976912   40272 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 19:18:05.976930   40272 command_runner.go:130] > # pids_limit = -1
	I1202 19:18:05.977014   40272 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1202 19:18:05.977040   40272 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 19:18:05.977112   40272 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 19:18:05.977136   40272 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 19:18:05.977153   40272 command_runner.go:130] > # log_size_max = -1
	I1202 19:18:05.977240   40272 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 19:18:05.977264   40272 command_runner.go:130] > # log_to_journald = false
	I1202 19:18:05.977344   40272 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 19:18:05.977370   40272 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 19:18:05.977390   40272 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 19:18:05.977478   40272 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 19:18:05.977500   40272 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 19:18:05.977570   40272 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 19:18:05.977596   40272 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 19:18:05.977614   40272 command_runner.go:130] > # read_only = false
	I1202 19:18:05.977722   40272 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 19:18:05.977797   40272 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 19:18:05.977817   40272 command_runner.go:130] > # live configuration reload.
	I1202 19:18:05.977836   40272 command_runner.go:130] > # log_level = "info"
	I1202 19:18:05.977872   40272 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 19:18:05.977956   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.978011   40272 command_runner.go:130] > # log_filter = ""
	I1202 19:18:05.978051   40272 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978073   40272 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 19:18:05.978093   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978128   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978214   40272 command_runner.go:130] > # uid_mappings = ""
	I1202 19:18:05.978236   40272 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978257   40272 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 19:18:05.978338   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978377   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978397   40272 command_runner.go:130] > # gid_mappings = ""
	I1202 19:18:05.978483   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 19:18:05.978556   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978583   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978606   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978700   40272 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 19:18:05.978728   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 19:18:05.978805   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978827   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978909   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978941   40272 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 19:18:05.979022   40272 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 19:18:05.979049   40272 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 19:18:05.979139   40272 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 19:18:05.979164   40272 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 19:18:05.979239   40272 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 19:18:05.979264   40272 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 19:18:05.979291   40272 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 19:18:05.979376   40272 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 19:18:05.979411   40272 command_runner.go:130] > # drop_infra_ctr = true
	I1202 19:18:05.979493   40272 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 19:18:05.979517   40272 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 19:18:05.979541   40272 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 19:18:05.979625   40272 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 19:18:05.979649   40272 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 19:18:05.979723   40272 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 19:18:05.979744   40272 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 19:18:05.979763   40272 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 19:18:05.979845   40272 command_runner.go:130] > # shared_cpuset = ""
	I1202 19:18:05.979867   40272 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 19:18:05.979937   40272 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 19:18:05.979961   40272 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 19:18:05.979983   40272 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 19:18:05.980069   40272 command_runner.go:130] > # pinns_path = ""
	I1202 19:18:05.980091   40272 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 19:18:05.980113   40272 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 19:18:05.980205   40272 command_runner.go:130] > # enable_criu_support = true
	I1202 19:18:05.980225   40272 command_runner.go:130] > # Enable/disable the generation of the container,
	I1202 19:18:05.980246   40272 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 19:18:05.980337   40272 command_runner.go:130] > # enable_pod_events = false
	I1202 19:18:05.980364   40272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 19:18:05.980435   40272 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 19:18:05.980456   40272 command_runner.go:130] > # default_runtime = "crun"
	I1202 19:18:05.980476   40272 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 19:18:05.980567   40272 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1202 19:18:05.980641   40272 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 19:18:05.980666   40272 command_runner.go:130] > # creation as a file is not desired either.
	I1202 19:18:05.980689   40272 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 19:18:05.980782   40272 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 19:18:05.980807   40272 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 19:18:05.980885   40272 command_runner.go:130] > # ]
	I1202 19:18:05.980907   40272 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 19:18:05.980989   40272 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 19:18:05.981060   40272 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 19:18:05.981080   40272 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 19:18:05.981155   40272 command_runner.go:130] > #
	I1202 19:18:05.981180   40272 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 19:18:05.981237   40272 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 19:18:05.981273   40272 command_runner.go:130] > # runtime_type = "oci"
	I1202 19:18:05.981291   40272 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 19:18:05.981311   40272 command_runner.go:130] > # inherit_default_runtime = false
	I1202 19:18:05.981423   40272 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 19:18:05.981442   40272 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 19:18:05.981461   40272 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 19:18:05.981479   40272 command_runner.go:130] > # monitor_env = []
	I1202 19:18:05.981507   40272 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 19:18:05.981530   40272 command_runner.go:130] > # allowed_annotations = []
	I1202 19:18:05.981553   40272 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 19:18:05.981571   40272 command_runner.go:130] > # no_sync_log = false
	I1202 19:18:05.981591   40272 command_runner.go:130] > # default_annotations = {}
	I1202 19:18:05.981620   40272 command_runner.go:130] > # stream_websockets = false
	I1202 19:18:05.981644   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.981733   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.981765   40272 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 19:18:05.981785   40272 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 19:18:05.981807   40272 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 19:18:05.981914   40272 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 19:18:05.981934   40272 command_runner.go:130] > #   in $PATH.
	I1202 19:18:05.981954   40272 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 19:18:05.981989   40272 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 19:18:05.982017   40272 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 19:18:05.982034   40272 command_runner.go:130] > #   state.
	I1202 19:18:05.982057   40272 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 19:18:05.982098   40272 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 19:18:05.982128   40272 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 19:18:05.982148   40272 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 19:18:05.982168   40272 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 19:18:05.982199   40272 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 19:18:05.982235   40272 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 19:18:05.982255   40272 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 19:18:05.982277   40272 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 19:18:05.982307   40272 command_runner.go:130] > #   The currently recognized values are:
	I1202 19:18:05.982329   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 19:18:05.983678   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 19:18:05.983703   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 19:18:05.983795   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 19:18:05.983829   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 19:18:05.983905   40272 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 19:18:05.983938   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 19:18:05.983958   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 19:18:05.983978   40272 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 19:18:05.984011   40272 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 19:18:05.984040   40272 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 19:18:05.984061   40272 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 19:18:05.984082   40272 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 19:18:05.984114   40272 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 19:18:05.984143   40272 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 19:18:05.984168   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 19:18:05.984191   40272 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 19:18:05.984220   40272 command_runner.go:130] > #   deprecated option "conmon".
	I1202 19:18:05.984244   40272 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 19:18:05.984265   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 19:18:05.984298   40272 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 19:18:05.984320   40272 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 19:18:05.984343   40272 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 19:18:05.984373   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 19:18:05.984413   40272 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1202 19:18:05.984432   40272 command_runner.go:130] > #   conmon-rs by using:
	I1202 19:18:05.984470   40272 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 19:18:05.984495   40272 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 19:18:05.984515   40272 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 19:18:05.984549   40272 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 19:18:05.984571   40272 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 19:18:05.984595   40272 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 19:18:05.984630   40272 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 19:18:05.984653   40272 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 19:18:05.984677   40272 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 19:18:05.984716   40272 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 19:18:05.984737   40272 command_runner.go:130] > #   when a machine crash happens.
	I1202 19:18:05.984765   40272 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 19:18:05.984801   40272 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 19:18:05.984825   40272 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 19:18:05.984846   40272 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 19:18:05.984877   40272 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 19:18:05.984902   40272 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1202 19:18:05.984921   40272 command_runner.go:130] > #
	I1202 19:18:05.984958   40272 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 19:18:05.984976   40272 command_runner.go:130] > #
	I1202 19:18:05.984996   40272 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 19:18:05.985026   40272 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 19:18:05.985052   40272 command_runner.go:130] > #
	I1202 19:18:05.985075   40272 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 19:18:05.985099   40272 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 19:18:05.985125   40272 command_runner.go:130] > #
	I1202 19:18:05.985149   40272 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 19:18:05.985169   40272 command_runner.go:130] > # feature.
	I1202 19:18:05.985199   40272 command_runner.go:130] > #
	I1202 19:18:05.985224   40272 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1202 19:18:05.985244   40272 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 19:18:05.985274   40272 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 19:18:05.985304   40272 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 19:18:05.985329   40272 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 19:18:05.985349   40272 command_runner.go:130] > #
	I1202 19:18:05.985381   40272 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 19:18:05.985404   40272 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 19:18:05.985422   40272 command_runner.go:130] > #
	I1202 19:18:05.985454   40272 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1202 19:18:05.985482   40272 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 19:18:05.985497   40272 command_runner.go:130] > #
	I1202 19:18:05.985518   40272 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 19:18:05.985550   40272 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 19:18:05.985582   40272 command_runner.go:130] > # limitation.
	I1202 19:18:05.985602   40272 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 19:18:05.985622   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 19:18:05.985670   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985689   40272 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 19:18:05.985704   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985709   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985725   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985731   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985741   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985745   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985749   40272 command_runner.go:130] > allowed_annotations = [
	I1202 19:18:05.985754   40272 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 19:18:05.985759   40272 command_runner.go:130] > ]
	I1202 19:18:05.985765   40272 command_runner.go:130] > privileged_without_host_devices = false
	I1202 19:18:05.985769   40272 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 19:18:05.985782   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 19:18:05.985786   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985795   40272 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 19:18:05.985801   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985810   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985821   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985829   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985833   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985837   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985845   40272 command_runner.go:130] > privileged_without_host_devices = false
	I1202 19:18:05.985852   40272 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 19:18:05.985860   40272 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 19:18:05.985867   40272 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 19:18:05.985881   40272 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 19:18:05.985892   40272 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 19:18:05.985905   40272 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 19:18:05.985915   40272 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 19:18:05.985926   40272 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 19:18:05.985936   40272 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 19:18:05.985947   40272 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 19:18:05.985953   40272 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 19:18:05.985964   40272 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 19:18:05.985968   40272 command_runner.go:130] > # Example:
	I1202 19:18:05.985975   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 19:18:05.985980   40272 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 19:18:05.985987   40272 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 19:18:05.985993   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 19:18:05.985996   40272 command_runner.go:130] > # cpuset = "0-1"
	I1202 19:18:05.986000   40272 command_runner.go:130] > # cpushares = "5"
	I1202 19:18:05.986007   40272 command_runner.go:130] > # cpuquota = "1000"
	I1202 19:18:05.986011   40272 command_runner.go:130] > # cpuperiod = "100000"
	I1202 19:18:05.986014   40272 command_runner.go:130] > # cpulimit = "35"
	I1202 19:18:05.986018   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.986025   40272 command_runner.go:130] > # The workload name is workload-type.
	I1202 19:18:05.986033   40272 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 19:18:05.986041   40272 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 19:18:05.986047   40272 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 19:18:05.986057   40272 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 19:18:05.986069   40272 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1202 19:18:05.986075   40272 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 19:18:05.986082   40272 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 19:18:05.986086   40272 command_runner.go:130] > # Default value is set to true
	I1202 19:18:05.986096   40272 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 19:18:05.986102   40272 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 19:18:05.986107   40272 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 19:18:05.986117   40272 command_runner.go:130] > # Default value is set to 'false'
	I1202 19:18:05.986121   40272 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 19:18:05.986127   40272 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1202 19:18:05.986137   40272 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 19:18:05.986142   40272 command_runner.go:130] > # timezone = ""
	I1202 19:18:05.986151   40272 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 19:18:05.986154   40272 command_runner.go:130] > #
	I1202 19:18:05.986160   40272 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 19:18:05.986171   40272 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 19:18:05.986178   40272 command_runner.go:130] > [crio.image]
	I1202 19:18:05.986184   40272 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 19:18:05.986189   40272 command_runner.go:130] > # default_transport = "docker://"
	I1202 19:18:05.986197   40272 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 19:18:05.986205   40272 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986212   40272 command_runner.go:130] > # global_auth_file = ""
	I1202 19:18:05.986217   40272 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 19:18:05.986223   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986230   40272 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.986237   40272 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 19:18:05.986243   40272 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986248   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986255   40272 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 19:18:05.986260   40272 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 19:18:05.986266   40272 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1202 19:18:05.986275   40272 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1202 19:18:05.986281   40272 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 19:18:05.986291   40272 command_runner.go:130] > # pause_command = "/pause"
	I1202 19:18:05.986301   40272 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 19:18:05.986309   40272 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 19:18:05.986319   40272 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 19:18:05.986324   40272 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 19:18:05.986331   40272 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 19:18:05.986337   40272 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 19:18:05.986343   40272 command_runner.go:130] > # pinned_images = [
	I1202 19:18:05.986346   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986352   40272 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 19:18:05.986360   40272 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 19:18:05.986367   40272 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 19:18:05.986376   40272 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 19:18:05.986381   40272 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 19:18:05.986388   40272 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 19:18:05.986394   40272 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 19:18:05.986401   40272 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 19:18:05.986415   40272 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 19:18:05.986422   40272 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1202 19:18:05.986431   40272 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1202 19:18:05.986436   40272 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1202 19:18:05.986442   40272 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 19:18:05.986452   40272 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 19:18:05.986456   40272 command_runner.go:130] > # changing them here.
	I1202 19:18:05.986462   40272 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 19:18:05.986468   40272 command_runner.go:130] > # insecure_registries = [
	I1202 19:18:05.986472   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986478   40272 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 19:18:05.986486   40272 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 19:18:05.986490   40272 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 19:18:05.986495   40272 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 19:18:05.986499   40272 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 19:18:05.986505   40272 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 19:18:05.986518   40272 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 19:18:05.986525   40272 command_runner.go:130] > # auto_reload_registries = false
	I1202 19:18:05.986531   40272 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 19:18:05.986543   40272 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1202 19:18:05.986549   40272 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 19:18:05.986556   40272 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 19:18:05.986561   40272 command_runner.go:130] > # The mode of short name resolution.
	I1202 19:18:05.986568   40272 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 19:18:05.986578   40272 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1202 19:18:05.986583   40272 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 19:18:05.986588   40272 command_runner.go:130] > # short_name_mode = "enforcing"
	I1202 19:18:05.986593   40272 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1202 19:18:05.986602   40272 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 19:18:05.986606   40272 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 19:18:05.986612   40272 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 19:18:05.986619   40272 command_runner.go:130] > # CNI plugins.
	I1202 19:18:05.986623   40272 command_runner.go:130] > [crio.network]
	I1202 19:18:05.986629   40272 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 19:18:05.986637   40272 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1202 19:18:05.986640   40272 command_runner.go:130] > # cni_default_network = ""
	I1202 19:18:05.986646   40272 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 19:18:05.986655   40272 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 19:18:05.986661   40272 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 19:18:05.986664   40272 command_runner.go:130] > # plugin_dirs = [
	I1202 19:18:05.986668   40272 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 19:18:05.986674   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986678   40272 command_runner.go:130] > # List of included pod metrics.
	I1202 19:18:05.986681   40272 command_runner.go:130] > # included_pod_metrics = [
	I1202 19:18:05.986684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986690   40272 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1202 19:18:05.986696   40272 command_runner.go:130] > [crio.metrics]
	I1202 19:18:05.986701   40272 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 19:18:05.986705   40272 command_runner.go:130] > # enable_metrics = false
	I1202 19:18:05.986718   40272 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 19:18:05.986723   40272 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 19:18:05.986732   40272 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 19:18:05.986738   40272 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 19:18:05.986744   40272 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 19:18:05.986748   40272 command_runner.go:130] > # metrics_collectors = [
	I1202 19:18:05.986753   40272 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 19:18:05.986760   40272 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 19:18:05.986764   40272 command_runner.go:130] > # 	"containers_oom_total",
	I1202 19:18:05.986768   40272 command_runner.go:130] > # 	"processes_defunct",
	I1202 19:18:05.986777   40272 command_runner.go:130] > # 	"operations_total",
	I1202 19:18:05.986782   40272 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 19:18:05.986787   40272 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 19:18:05.986793   40272 command_runner.go:130] > # 	"operations_errors_total",
	I1202 19:18:05.986797   40272 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 19:18:05.986802   40272 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 19:18:05.986809   40272 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 19:18:05.986814   40272 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 19:18:05.986819   40272 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 19:18:05.986823   40272 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 19:18:05.986829   40272 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 19:18:05.986836   40272 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 19:18:05.986840   40272 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 19:18:05.986844   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986852   40272 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 19:18:05.986862   40272 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 19:18:05.986870   40272 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 19:18:05.986877   40272 command_runner.go:130] > # metrics_port = 9090
	I1202 19:18:05.986882   40272 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 19:18:05.986886   40272 command_runner.go:130] > # metrics_socket = ""
	I1202 19:18:05.986893   40272 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 19:18:05.986899   40272 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 19:18:05.986906   40272 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 19:18:05.986918   40272 command_runner.go:130] > # certificate on any modification event.
	I1202 19:18:05.986933   40272 command_runner.go:130] > # metrics_cert = ""
	I1202 19:18:05.986939   40272 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 19:18:05.986947   40272 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 19:18:05.986950   40272 command_runner.go:130] > # metrics_key = ""
	I1202 19:18:05.986956   40272 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 19:18:05.986962   40272 command_runner.go:130] > [crio.tracing]
	I1202 19:18:05.986967   40272 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 19:18:05.986972   40272 command_runner.go:130] > # enable_tracing = false
	I1202 19:18:05.986979   40272 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 19:18:05.986984   40272 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 19:18:05.986990   40272 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 19:18:05.986997   40272 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 19:18:05.987001   40272 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 19:18:05.987007   40272 command_runner.go:130] > [crio.nri]
	I1202 19:18:05.987011   40272 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 19:18:05.987015   40272 command_runner.go:130] > # enable_nri = true
	I1202 19:18:05.987019   40272 command_runner.go:130] > # NRI socket to listen on.
	I1202 19:18:05.987029   40272 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 19:18:05.987033   40272 command_runner.go:130] > # NRI plugin directory to use.
	I1202 19:18:05.987037   40272 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 19:18:05.987045   40272 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 19:18:05.987050   40272 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 19:18:05.987056   40272 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 19:18:05.987116   40272 command_runner.go:130] > # nri_disable_connections = false
	I1202 19:18:05.987126   40272 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 19:18:05.987130   40272 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 19:18:05.987136   40272 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 19:18:05.987142   40272 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 19:18:05.987147   40272 command_runner.go:130] > # NRI default validator configuration.
	I1202 19:18:05.987157   40272 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 19:18:05.987166   40272 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 19:18:05.987170   40272 command_runner.go:130] > # can be restricted/rejected:
	I1202 19:18:05.987178   40272 command_runner.go:130] > # - OCI hook injection
	I1202 19:18:05.987186   40272 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 19:18:05.987191   40272 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 19:18:05.987196   40272 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 19:18:05.987203   40272 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 19:18:05.987209   40272 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 19:18:05.987216   40272 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 19:18:05.987225   40272 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 19:18:05.987230   40272 command_runner.go:130] > #
	I1202 19:18:05.987234   40272 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 19:18:05.987239   40272 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 19:18:05.987245   40272 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 19:18:05.987254   40272 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 19:18:05.987260   40272 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 19:18:05.987268   40272 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 19:18:05.987279   40272 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 19:18:05.987283   40272 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 19:18:05.987286   40272 command_runner.go:130] > # ]
	I1202 19:18:05.987291   40272 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1202 19:18:05.987299   40272 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 19:18:05.987302   40272 command_runner.go:130] > [crio.stats]
	I1202 19:18:05.987308   40272 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 19:18:05.987316   40272 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 19:18:05.987320   40272 command_runner.go:130] > # stats_collection_period = 0
	I1202 19:18:05.987326   40272 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 19:18:05.987334   40272 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 19:18:05.987344   40272 command_runner.go:130] > # collection_period = 0
	I1202 19:18:05.987392   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941536561Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 19:18:05.987405   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941573139Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 19:18:05.987421   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941598771Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 19:18:05.987431   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941629007Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 19:18:05.987447   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.94184771Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.987460   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.942236436Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 19:18:05.987477   40272 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
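	The stderr lines above show how CRI-O layers its configuration: the base /etc/crio/crio.conf is read first (or skipped if absent) and then every drop-in under /etc/crio/crio.conf.d is applied, which is why 02-crio.conf is loaded before 10-crio.conf. A minimal Go sketch of that lexical drop-in ordering follows; the directory path is taken from the log, and the assumption that later names override earlier settings reflects CRI-O's documented drop-in behavior.

	// dropins.go: list CRI-O drop-in config files in the order they are applied.
	// Minimal sketch; filepath.Glob already returns names in lexical order,
	// so 02-crio.conf sorts (and is applied) before 10-crio.conf.
	package main

	import (
		"fmt"
		"path/filepath"
		"sort"
	)

	func main() {
		files, err := filepath.Glob("/etc/crio/crio.conf.d/*.conf")
		if err != nil {
			panic(err)
		}
		sort.Strings(files) // explicit for clarity: later files win on conflicting keys
		for _, f := range files {
			fmt.Println("applying drop-in:", f)
		}
	}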
	I1202 19:18:05.987606   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:05.987620   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:05.987644   40272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:18:05.987670   40272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:18:05.987799   40272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:18:05.987877   40272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:18:05.995250   40272 command_runner.go:130] > kubeadm
	I1202 19:18:05.995271   40272 command_runner.go:130] > kubectl
	I1202 19:18:05.995276   40272 command_runner.go:130] > kubelet
	I1202 19:18:05.995308   40272 binaries.go:51] Found k8s binaries, skipping transfer
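	The ls step above confirms that kubeadm, kubectl and kubelet already exist under /var/lib/minikube/binaries/v1.35.0-beta.0, so the binary transfer is skipped. A minimal Go sketch of that presence check; the directory layout is taken from the log and the helper name is illustrative.

	// checkBinaries reports whether kubeadm, kubectl and kubelet are already
	// present for the requested Kubernetes version, so copying can be skipped.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func checkBinaries(version string) bool {
		dir := filepath.Join("/var/lib/minikube/binaries", version)
		for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
			if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		fmt.Println("found k8s binaries:", checkBinaries("v1.35.0-beta.0"))
	}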
	I1202 19:18:05.995379   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:18:06.002605   40272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:18:06.015240   40272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:18:06.033933   40272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 19:18:06.047469   40272 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:18:06.051453   40272 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 19:18:06.051580   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:06.161840   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:06.543709   40272 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:18:06.543774   40272 certs.go:195] generating shared ca certs ...
	I1202 19:18:06.543803   40272 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:06.543968   40272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:18:06.544037   40272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:18:06.544058   40272 certs.go:257] generating profile certs ...
	I1202 19:18:06.544203   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:18:06.544311   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:18:06.544381   40272 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:18:06.544424   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:18:06.544458   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:18:06.544493   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:18:06.544537   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:18:06.544570   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:18:06.544599   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:18:06.544648   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:18:06.544683   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:18:06.544773   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:18:06.544828   40272 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:18:06.544854   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:18:06.544932   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:18:06.551062   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:18:06.551141   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:18:06.551220   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:06.551261   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.551291   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.551312   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.552213   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:18:06.569384   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:18:06.587883   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:18:06.609527   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:18:06.628039   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:18:06.644623   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:18:06.662478   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:18:06.679440   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:18:06.696330   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:18:06.713584   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:18:06.731033   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:18:06.747714   40272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:18:06.761265   40272 ssh_runner.go:195] Run: openssl version
	I1202 19:18:06.766652   40272 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 19:18:06.767017   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:18:06.774639   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.777834   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778051   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778107   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.818127   40272 command_runner.go:130] > b5213941
	I1202 19:18:06.818625   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:18:06.826391   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:18:06.834719   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838324   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838367   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838418   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.878978   40272 command_runner.go:130] > 51391683
	I1202 19:18:06.879420   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:18:06.887230   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:18:06.895470   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899261   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899287   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899335   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.940199   40272 command_runner.go:130] > 3ec20f2e
	I1202 19:18:06.940694   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
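	The sequence above installs each CA certificate by linking it into /etc/ssl/certs twice: once under its own name and once under its OpenSSL subject hash with a ".0" suffix, which is the lookup scheme OpenSSL-based tools expect. A minimal Go sketch of the same two-link idea; it shells out to openssl for the hash, tolerates links that already exist rather than forcing replacement as ln -fs does, and the helper name is illustrative.

	// installCACert links a CA certificate into /etc/ssl/certs under both its
	// file name and its OpenSSL subject hash (<hash>.0). Assumes root privileges.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(pemPath string) error {
		nameLink := filepath.Join("/etc/ssl/certs", filepath.Base(pemPath))
		if err := os.Symlink(pemPath, nameLink); err != nil && !os.IsExist(err) {
			return err
		}
		// openssl prints the subject hash used for certificate directory lookups.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hashLink := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		if err := os.Symlink(pemPath, hashLink); err != nil && !os.IsExist(err) {
			return err
		}
		return nil
	}

	func main() {
		fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem"))
	}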
	I1202 19:18:06.948359   40272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951793   40272 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951816   40272 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 19:18:06.951822   40272 command_runner.go:130] > Device: 259,1	Inode: 1315539     Links: 1
	I1202 19:18:06.951851   40272 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:06.951865   40272 command_runner.go:130] > Access: 2025-12-02 19:13:58.595474405 +0000
	I1202 19:18:06.951871   40272 command_runner.go:130] > Modify: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951876   40272 command_runner.go:130] > Change: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951881   40272 command_runner.go:130] >  Birth: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951960   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:18:06.996850   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:06.997318   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:18:07.037433   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.037885   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:18:07.078161   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.078666   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:18:07.119364   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.119441   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:18:07.159628   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.160136   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:18:07.204176   40272 command_runner.go:130] > Certificate will not expire
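	The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. A minimal Go sketch of the same property using crypto/x509; the certificate path is illustrative and only the first PEM block is inspected.

	// expiresWithinDay reports whether the certificate at path expires within
	// the next 24h, the condition `openssl x509 -checkend 86400` tests above.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithinDay(path string) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Until(cert.NotAfter) < 24*time.Hour, nil
	}

	func main() {
		soon, err := expiresWithinDay("/var/lib/minikube/certs/front-proxy-client.crt")
		fmt.Println("expires within 24h:", soon, err)
	}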
	I1202 19:18:07.204662   40272 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:07.204768   40272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:18:07.204851   40272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:18:07.233427   40272 cri.go:89] found id: ""
	I1202 19:18:07.233514   40272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:18:07.240330   40272 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 19:18:07.240352   40272 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 19:18:07.240359   40272 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 19:18:07.241346   40272 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:18:07.241363   40272 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:18:07.241437   40272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:18:07.248549   40272 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:18:07.248941   40272 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-374330" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249040   40272 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "functional-374330" cluster setting kubeconfig missing "functional-374330" context setting]
	I1202 19:18:07.249312   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.249749   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249896   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
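	The kubeconfig repair above fires because the "functional-374330" cluster and context entries are missing from the test run's kubeconfig. A hedged way to reproduce the same check by hand (the kubeconfig path and profile name come from the log; the kubectl config commands are an assumption about how to verify it, not the code minikube runs):

	    # Check whether the profile's cluster and context entries exist in the kubeconfig
	    # that minikube is repairing.
	    export KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	    kubectl config get-clusters | grep -qx 'functional-374330' \
	      || echo "cluster entry for functional-374330 is missing"
	    kubectl config get-contexts -o name | grep -qx 'functional-374330' \
	      || echo "context entry for functional-374330 is missing"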
	I1202 19:18:07.250443   40272 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:18:07.250467   40272 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:18:07.250474   40272 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:18:07.250478   40272 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:18:07.250487   40272 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:18:07.250526   40272 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:18:07.250793   40272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:18:07.258519   40272 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:18:07.258557   40272 kubeadm.go:602] duration metric: took 17.188352ms to restartPrimaryControlPlane
	I1202 19:18:07.258569   40272 kubeadm.go:403] duration metric: took 53.913832ms to StartCluster
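	The restart path above decides whether the control plane needs reconfiguration by diffing the freshly generated kubeadm config against the one already on the node; here they match, so restartPrimaryControlPlane finishes in about 17ms. A rough equivalent of that comparison (file paths are verbatim from the log; the surrounding shell is assumed):

	    # An empty diff (exit status 0) means the running cluster can be reused as-is.
	    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	      echo "no reconfiguration required"
	    else
	      echo "kubeadm config changed; the control plane would need reconfiguring" >&2
	    fi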
	I1202 19:18:07.258583   40272 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.258647   40272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.259281   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.259482   40272 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:18:07.259876   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:07.259927   40272 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:18:07.259993   40272 addons.go:70] Setting storage-provisioner=true in profile "functional-374330"
	I1202 19:18:07.260007   40272 addons.go:239] Setting addon storage-provisioner=true in "functional-374330"
	I1202 19:18:07.260034   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.260061   40272 addons.go:70] Setting default-storageclass=true in profile "functional-374330"
	I1202 19:18:07.260107   40272 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-374330"
	I1202 19:18:07.260433   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.260513   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.266365   40272 out.go:179] * Verifying Kubernetes components...
	I1202 19:18:07.269343   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:07.293348   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.293507   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.293796   40272 addons.go:239] Setting addon default-storageclass=true in "functional-374330"
	I1202 19:18:07.293827   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.294253   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.304761   40272 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:18:07.307700   40272 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.307724   40272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 19:18:07.307789   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.332842   40272 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:07.332860   40272 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 19:18:07.332914   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.347890   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.373144   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.469482   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:07.472955   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.515784   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.293178   40272 node_ready.go:35] waiting up to 6m0s for node "functional-374330" to be "Ready" ...
	I1202 19:18:08.293301   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.293355   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.293568   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293595   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293615   40272 retry.go:31] will retry after 144.187129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293684   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293702   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293710   40272 retry.go:31] will retry after 132.365923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
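	Every kubectl apply above fails with "connection refused" because the apiserver behind port 8441 has not come back up yet, so the addon manager keeps retrying with a growing backoff. A rough stand-in for that loop, waiting for the endpoint before applying (host, port, kubectl binary and manifest paths come from the log; the curl probe of /readyz and the sleep interval are assumptions):

	    # Wait for the apiserver to answer its readiness endpoint, then apply the addon
	    # manifests once instead of retrying blindly. -k skips TLS verification because
	    # the probe does not present the cluster CA.
	    until curl -ksf https://localhost:8441/readyz >/dev/null; do
	      echo "apiserver not ready yet, retrying in 2s..."
	      sleep 2
	    done
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml \
	      -f /etc/kubernetes/addons/storageclass.yaml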
	I1202 19:18:08.427169   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.438559   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.510555   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513791   40272 retry.go:31] will retry after 461.570102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513742   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513825   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513833   40272 retry.go:31] will retry after 354.67857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.794133   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.794203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:08.868974   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.929070   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.932369   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.932402   40272 retry.go:31] will retry after 765.19043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.975575   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.036469   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.042296   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.042376   40272 retry.go:31] will retry after 433.124039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.293618   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.293713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:09.476440   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.538441   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.541412   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.541444   40272 retry.go:31] will retry after 747.346338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.698768   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:09.764666   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.764703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.764723   40272 retry.go:31] will retry after 541.76994ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.793827   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.793965   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.794261   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:10.289986   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:10.293340   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.293732   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:10.293780   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
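	The node_ready loop above polls GET /api/v1/nodes/functional-374330 roughly twice a second and tolerates connection refusals while the apiserver restarts, within the 6m0s budget set earlier. An approximate hand-rolled version of the same wait (node name and endpoint come from the log; the kubectl/jsonpath usage and polling interval are assumptions):

	    # Poll until the node reports Ready, ignoring errors while the apiserver on
	    # 192.168.49.2:8441 is still coming up. Assumes KUBECONFIG points at the repaired
	    # kubeconfig from earlier in the log.
	    until [ "$(kubectl get node functional-374330 \
	          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)" = "True" ]; do
	      sleep 0.5
	    done
	    echo "node functional-374330 is Ready"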
	I1202 19:18:10.307063   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:10.373573   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.373608   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.373627   40272 retry.go:31] will retry after 1.037281057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388739   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.388813   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388864   40272 retry.go:31] will retry after 1.072570226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.794280   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.794348   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.794651   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.293375   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.293466   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.293739   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.411088   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:11.462503   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:11.470558   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.470603   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.470624   40272 retry.go:31] will retry after 2.459470693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530455   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.530510   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530529   40272 retry.go:31] will retry after 2.35440359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.794013   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.794477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:12.294194   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.294271   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:12.294648   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:12.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.793567   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.793595   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.793686   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.794006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.885433   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:13.930854   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:13.940303   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:13.943330   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:13.943359   40272 retry.go:31] will retry after 2.562469282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000907   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:14.000951   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000969   40272 retry.go:31] will retry after 3.172954134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.294316   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.294381   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:14.793366   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.793435   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.793778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:14.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:15.293495   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:15.793590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.793675   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.794004   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.293435   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.506093   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:16.576298   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:16.580372   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.580403   40272 retry.go:31] will retry after 6.193423377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.793925   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.794050   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:16.794410   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:17.174990   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:17.234065   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:17.234161   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.234184   40272 retry.go:31] will retry after 6.017051757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.293565   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.293640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:17.793940   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.794318   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.294120   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.294191   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.294497   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.794258   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.794341   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.794641   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:18.794693   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:19.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:19.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.793693   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.794032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.293712   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.793838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:21.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:21.293929   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:21.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.293417   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.774666   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:22.793983   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.794053   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.835259   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:22.835293   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:22.835313   40272 retry.go:31] will retry after 8.891499319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.251502   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:23.293920   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.293995   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.294305   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:23.294361   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:23.316803   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:23.325390   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.325420   40272 retry.go:31] will retry after 5.436174555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.794140   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.794209   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.794514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.294165   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.294234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.294532   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.794307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.794552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:25.294405   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.294476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.294786   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:25.294838   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:25.793518   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.793593   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.793954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.293881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.793441   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.793515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.793898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.293636   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.294038   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.793924   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.793994   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.794242   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:27.794290   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:28.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.294085   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.294398   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.762126   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:28.793717   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.794058   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.820417   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:28.820461   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:28.820480   40272 retry.go:31] will retry after 5.23527752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:29.294048   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.294387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:29.794183   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.794303   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.794634   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:29.794706   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:30.294267   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.294340   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.294624   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:30.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.793398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.793762   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.293841   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.727474   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:31.785329   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:31.788538   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.788571   40272 retry.go:31] will retry after 14.027342391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.793764   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.793834   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.794170   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:32.293926   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.293991   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.294245   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:32.294283   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:32.794305   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.794380   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.794731   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.293682   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.294006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:34.056328   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:34.114988   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:34.115034   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.115053   40272 retry.go:31] will retry after 20.825216377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.294372   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.294768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:34.294823   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:34.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.293815   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.293900   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.294151   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.793855   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.793935   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.794205   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.293483   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.793564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.793873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:36.793925   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:37.293668   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.293762   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.294075   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:37.793947   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.794293   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.294087   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.294335   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.794481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:38.794533   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:39.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.294563   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:39.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.794411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.794661   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.793560   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.793636   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:41.293642   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:41.294091   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:41.793737   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.793809   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.794119   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:42.294249   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.294351   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.295481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1202 19:18:42.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.794309   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.794549   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:43.294307   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.294779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:43.294833   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:43.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.793526   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.293539   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.293609   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.293775   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.294288   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.794074   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.794139   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:45.794427   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:45.816754   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:45.885215   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:45.888326   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:45.888364   40272 retry.go:31] will retry after 11.821193731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:46.293908   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.293987   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.294332   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:46.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.794188   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.794450   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.294325   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.294656   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.793465   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:48.293461   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.293549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:48.293980   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:48.793521   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.793585   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.793925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.293671   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.293755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.294085   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.793786   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.793857   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.794203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:50.293936   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.294005   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.294362   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:50.794095   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.794170   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.794494   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.294326   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.294720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:52.793945   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:53.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.293667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.293927   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:53.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.793852   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.794188   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.294005   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.294075   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.294426   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.794205   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.794284   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.794553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:54.794600   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:54.941002   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:55.004086   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:55.004129   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.004148   40272 retry.go:31] will retry after 20.918145005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.293488   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.293564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.293885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:55.793617   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.793707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.794018   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.293767   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.793648   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.793755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.794090   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:57.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.293891   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.294211   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:57.294263   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:57.710107   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:57.765891   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:57.765928   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.765947   40272 retry.go:31] will retry after 13.115816401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.793988   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.794063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.794301   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.294217   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.793430   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.793738   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.293442   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.293550   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.793871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:59.793930   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:00.295673   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.295757   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.296162   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:00.793971   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.794393   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.294295   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.294639   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.793817   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:02.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:02.293931   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:02.793522   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.793600   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.293690   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.293758   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.294007   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.793884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:04.293572   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:04.294031   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-374330 polled every ~0.5s from 19:19:04.79 through 19:19:10.79; each attempt returned an empty response and the node_ready check warned (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1202 19:19:10.882157   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:10.938212   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:10.938272   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:10.938296   40272 retry.go:31] will retry after 16.990081142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-374330 polled every ~0.5s from 19:19:11.29 through 19:19:15.79; every attempt failed with dial tcp 192.168.49.2:8441: connect: connection refused while waiting for node "functional-374330" to report Ready ...]
	I1202 19:19:15.923138   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:15.976380   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:15.979446   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:15.979475   40272 retry.go:31] will retry after 43.938975662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-374330 polled every ~0.5s from 19:19:16.29 through 19:19:27.79; every attempt failed with dial tcp 192.168.49.2:8441: connect: connection refused (node_ready check will retry) ...]
	I1202 19:19:27.928884   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:27.980862   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983877   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983967   40272 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-374330 polled every ~0.5s from 19:19:28.29 through 19:19:59.79; every attempt failed with dial tcp 192.168.49.2:8441: connect: connection refused (node_ready check will retry) ...]
	I1202 19:19:59.919155   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:59.978732   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978768   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978842   40272 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:59.981270   40272 out.go:179] * Enabled addons: 
	I1202 19:19:59.984008   40272 addons.go:530] duration metric: took 1m52.724080055s for enable addons: enabled=[]
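The remainder of this log is the same half-second poll of /api/v1/nodes/functional-374330 repeating until the apiserver becomes reachable again. As a rough sketch only (minikube's actual node_ready.go differs, and the kubeconfig path and timeout here are assumptions), the wait amounts to polling the node's Ready condition with client-go and ignoring transient errors:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        // Poll every 500ms, matching the cadence visible in the log above.
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // e.g. "connection refused" while the apiserver restarts: keep retrying.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
                return false, nil
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        // Kubeconfig path copied from the log; purely illustrative here.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "functional-374330", 6*time.Minute); err != nil {
            fmt.Println("node never became Ready:", err)
        }
    }

The storageclass addon failure just above is the same root cause: kubectl apply cannot download the OpenAPI schema from localhost:8441 while the apiserver is down, so validation fails and the addon enable loop records "apply failed, will retry".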
	I1202 19:20:00.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.319155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=25
	I1202 19:20:00.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.793581   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.293643   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.294269   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.794085   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:01.794475   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:02.294283   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.294801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:02.793839   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.793918   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.794224   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.293780   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.293848   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.294097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.793818   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.793890   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.794190   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:04.294069   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.294138   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.294439   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:04.294488   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:04.794180   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.794261   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.794525   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.294270   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.294339   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.294637   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.793358   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.793447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.793770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.794145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:06.794195   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:07.293975   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.294054   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.294413   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:07.794308   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.794425   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.794772   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.293671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.294020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:09.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.293769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:09.293828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:09.794253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.794326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.794686   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:11.293475   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.293548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:11.293934   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:11.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.293544   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.293610   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.293915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.793833   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.793916   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.794241   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:13.293799   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.293872   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.294179   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:13.294238   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:13.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.794022   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.794276   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.294026   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.294105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.294453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.794135   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.794207   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:15.294253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.294326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:15.294638   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:15.793355   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.793426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.793551   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.793621   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.293774   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.293867   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.794117   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.794213   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.794539   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:17.794594   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:18.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.294374   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:18.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.794070   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:20.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.293900   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:20.293961   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:20.793436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.293924   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.793463   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.793956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.293478   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.793771   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:22.793827   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:23.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:23.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.293436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.293506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:24.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:25.293608   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.293707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.294025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:25.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.794022   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:26.794082   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:27.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.293785   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.294032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:27.793959   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.294157   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.294237   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.294582   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.794354   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.794429   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.794706   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:28.794758   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:29.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:29.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.293432   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.293782   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.793582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:31.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.293580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:31.293985   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:31.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.793797   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.793874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.794194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:33.293954   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.294018   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.294268   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:33.294307   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:33.794022   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.794093   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.794394   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.294075   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.294145   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.294479   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.794081   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.794161   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.794411   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:35.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.294307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.294631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:35.294684   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:35.794291   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.794361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.794710   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.294383   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.294672   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.793869   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.293817   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.294175   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.794113   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.794365   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:37.794404   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:38.294151   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.294567   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:38.794364   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.794441   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.794795   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.794051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:40.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.293749   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:40.294131   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:40.793755   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.794137   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.293804   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.293874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.294208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.794044   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.794437   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:42.294271   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.294354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.294638   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:42.294682   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:42.793464   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.293529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.293884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.793555   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.793904   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.293677   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.793724   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.793796   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:44.794158   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:45.293768   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.293839   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.294135   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:45.794039   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.294279   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.294679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.793388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.793455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:47.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.293786   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.294051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:47.294093   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:47.794031   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.794101   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.294153   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.294227   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.294472   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.794239   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.794680   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.293461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.293815   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.793404   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.793801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:49.793850   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:50.293494   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.293926   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:50.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.793579   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.293925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.794124   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:51.794181   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:52.293850   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.293930   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.294277   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:52.794083   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.794149   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.794406   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.294121   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.294195   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.294529   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.794350   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.794679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:53.794733   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:54.293471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.293541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:54.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:56.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.293455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:56.293831   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:56.793498   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.793574   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.793934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.293700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.293941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.793858   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.793928   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.794244   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:58.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.294083   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.294416   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:58.294470   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:58.794152   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.794222   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.794483   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.294312   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.294645   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.794292   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.794364   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.794674   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.293476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.293799   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.793832   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:00.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:01.293577   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:01.793727   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.793804   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.293823   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.293903   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.294253   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.794285   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.794354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.794650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:02.794701   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:03.293400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.293470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:03.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.293824   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.793783   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:05.293327   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.293398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:05.293767   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:05.794396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.794464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.794774   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.293683   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.793543   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:07.293810   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.293905   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.294228   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:07.294294   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:07.794228   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.794296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.794557   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.294314   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.294391   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.294721   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.793513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.293515   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.793507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.793849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:09.793915   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:10.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.293946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:10.793633   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.793713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.794014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.293862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.293767   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:12.293819   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:12.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.293560   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.293641   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:14.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.293853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:14.293920   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:14.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.293520   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.293586   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.793540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.793613   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:16.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.293615   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:16.293998   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:16.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.293689   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.293770   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.793898   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.793968   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.794294   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:18.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.294082   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.294374   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:18.294428   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:18.794173   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.794258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.794584   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.294375   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.294447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.294755   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.793492   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.793769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.793542   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.793614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.793957   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:20.794013   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:21.293675   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.293740   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:21.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.293837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.793766   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.793836   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.794155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:22.794204   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:23.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:23.793615   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.794078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.793860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:25.293571   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.293642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.293963   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:25.294010   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:25.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.793479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.793840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.793506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:27.293759   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.294093   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:27.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:27.794030   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.794105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.794432   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.294126   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.294546   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.794342   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.794587   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.293336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.793558   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:29.794070   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:30.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.293704   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:30.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.793500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:32.293467   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.293899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:32.293955   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:32.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.793527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.293566   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.293634   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.793481   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.793759   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:34.793805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:35.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.293507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:35.793599   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.793691   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.293780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.793879   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.793947   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.794270   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:36.794327   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:37.294002   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.294382   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:37.794293   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.794366   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.794623   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.293793   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.793479   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.793551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.793911   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:39.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:39.293900   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:39.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.793400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.793469   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.293410   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.293820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.793779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:41.793832   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:42.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:42.793809   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.793881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.794230   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.794300   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.794607   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:43.794654   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:44.294246   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.294318   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:44.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.793399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.793724   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.793836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:46.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.293848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:46.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:46.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.793766   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.293717   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.294035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.793981   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.794397   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:48.293997   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.294340   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:48.294384   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:48.794112   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.794192   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.794535   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.294292   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.794401   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.794648   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.293343   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.293431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.293749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.793332   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.793431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.793733   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:50.793781   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:51.294382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.294749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:51.794404   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.794484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.794827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.793741   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.794061   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:52.794098   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:53.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.293502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.293842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:53.793547   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.793619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.293686   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.293772   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:55.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.293522   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:55.293916   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:55.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.793966   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.793700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.794037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:57.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.293812   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.294147   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:57.294199   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:57.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.794029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.794360   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.294144   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.294215   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.294530   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.794311   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.794384   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.794669   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.293382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.293457   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.793915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:59.793970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:00.294203   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.294291   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:00.794373   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.794448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.794765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.793408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:02.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.293521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.293831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:02.293882   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:02.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.793524   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.294092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.793779   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.793863   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:04.294013   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.294096   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.294427   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:04.294479   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:04.794192   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.794518   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.294290   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.294361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.294692   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.293537   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.293889   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.793886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:06.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:07.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.293561   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:07.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.794431   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.294315   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.793325   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.793395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:09.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:09.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:09.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.793938   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.293512   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.293605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.293914   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.793473   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:11.293419   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:11.293911   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:11.793571   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.793667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.793998   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.293707   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.294044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.794038   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.794457   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:13.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.294294   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.294608   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:13.294662   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:13.793319   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.793385   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.793631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.293401   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.793974   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.293634   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.293715   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.294019   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.793580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.793905   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:15.793957   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:16.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.293753   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.294105   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:16.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.794139   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.294035   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.294104   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.294447   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.794420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.794500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.794802   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:17.794864   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:18.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:18.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.793908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.793487   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:20.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:20.294043   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:20.793747   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.793818   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.293829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.294078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.793486   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.293599   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.293684   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.293961   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.793847   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.793919   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.794173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:22.794221   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:23.294004   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.294391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:23.794182   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.794569   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.294310   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.294382   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.294678   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:25.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.293849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:25.293899   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:25.793411   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.793784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.293511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:27.293716   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.293790   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:27.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:27.794020   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.794114   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.294228   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.294302   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.294604   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.794372   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.794442   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.793369   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.793452   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.793775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:29.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:30.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:30.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.793820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.293618   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.293975   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.793639   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.793724   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.794026   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:31.794076   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:32.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.293867   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:32.793458   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.793534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.293479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.293808   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.793577   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:34.293638   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.293733   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.294053   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:34.294138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:34.793757   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.794123   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.293805   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.293875   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.294212   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.793796   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.793870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.794183   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:36.293916   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.293981   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.294225   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:36.294266   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:36.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.794051   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.794349   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.294147   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.294225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.294553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.794437   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.794726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.293504   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.793561   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.793979   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:38.794037   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:39.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.293812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:39.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.793508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.293825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.793461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.793725   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:41.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:41.293919   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:41.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.306206   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.306286   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.306588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.793842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:43.293564   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:43.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:43.793719   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.794033   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.293420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.293840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.794225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.794573   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.293335   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.293432   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.293823   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.793584   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.793699   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.794020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:45.794077   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:46.293765   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.294194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:46.793979   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.294352   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.294421   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.294757   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.793514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:48.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.293488   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:48.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:48.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.793896   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.793746   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.794140   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:50.293958   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.294029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.294356   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:50.794160   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.794234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.794577   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.294330   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.294654   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.793400   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.293818   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.793765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:52.793817   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:53.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:53.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.793594   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.793990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.293543   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.293619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.293933   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.793885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:54.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:55.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.293897   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:55.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.793627   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.293469   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.293845   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.793575   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.793643   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.793943   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:56.793996   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:57.293776   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.293861   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:57.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.794158   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.294275   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.294346   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.294665   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.793386   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.793763   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:59.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.293903   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:59.293962   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:59.793451   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.793525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.296332   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.296406   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.296694   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.293498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.793424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:01.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:02.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.293637   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.294144   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:02.793976   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.794047   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.294017   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.294088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.294379   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.794118   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.794444   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:03.794495   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:04.294106   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.294176   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.294496   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:04.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.794365   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.794711   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.793605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.793941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:06.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.293719   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.294067   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:06.294117   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:06.793866   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.793938   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.293887   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.293967   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.294287   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.794150   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.794403   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:08.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.294258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.294594   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:08.294647   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:08.793335   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.793404   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.793760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.793478   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.293956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.793532   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.793599   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:10.793903   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:11.293547   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.293625   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:11.793691   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.793764   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.794076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.793673   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.794066   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:12.794115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:13.293795   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.293870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.294207   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:13.793969   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.794283   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.294039   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.294109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.294436   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.794094   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.794171   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.794488   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:14.794541   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:15.294282   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.294357   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.294611   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:15.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.794443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.794770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.293836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.793477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:17.293700   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:17.294109   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:17.793903   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.793973   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.794593   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.294328   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.294646   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.793322   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.793392   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.793726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.793807   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:19.793870   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:20.293525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.293596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:20.793525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.793601   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.793946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.293705   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.294002   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.793707   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.793780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.794097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:21.794151   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:22.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.293892   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.294246   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:22.794023   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.794088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.794347   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.294098   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.294169   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.294495   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.794344   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.794436   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.794764   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:23.794818   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:24.293402   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.293471   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:24.793418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.793495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.293624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.293973   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.793669   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.793735   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.793985   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:26.293681   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.293789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.294111   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:26.294163   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:26.793710   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.793789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.794114   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:27.293843   40272 type.go:168] "Request Body" body=""
	I1202 19:23:27.293914   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:27.294239   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:27.794080   40272 type.go:168] "Request Body" body=""
	I1202 19:23:27.794155   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:27.794487   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:28.294258   40272 type.go:168] "Request Body" body=""
	I1202 19:23:28.294337   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:28.294650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:28.294705   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:28.793349   40272 type.go:168] "Request Body" body=""
	I1202 19:23:28.793419   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:28.793701   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:29.294241   40272 type.go:168] "Request Body" body=""
	I1202 19:23:29.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:29.294701   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:29.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:23:29.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:29.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:30.293509   40272 type.go:168] "Request Body" body=""
	I1202 19:23:30.293582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:30.293886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:30.793466   40272 type.go:168] "Request Body" body=""
	I1202 19:23:30.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:30.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:30.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:31.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:23:31.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:31.293801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:31.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:23:31.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:32.293492   40272 type.go:168] "Request Body" body=""
	I1202 19:23:32.293560   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:32.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:32.793447   40272 type.go:168] "Request Body" body=""
	I1202 19:23:32.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:32.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:33.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:23:33.293569   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:33.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:33.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:33.793580   40272 type.go:168] "Request Body" body=""
	I1202 19:23:33.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:33.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:34.293678   40272 type.go:168] "Request Body" body=""
	I1202 19:23:34.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:34.294103   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:34.793774   40272 type.go:168] "Request Body" body=""
	I1202 19:23:34.793844   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:34.794094   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:35.293808   40272 type.go:168] "Request Body" body=""
	I1202 19:23:35.293879   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:35.294203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:35.294261   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:35.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:23:35.794103   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:35.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:36.294141   40272 type.go:168] "Request Body" body=""
	I1202 19:23:36.294296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:36.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:36.794385   40272 type.go:168] "Request Body" body=""
	I1202 19:23:36.794460   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:36.794791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:37.293721   40272 type.go:168] "Request Body" body=""
	I1202 19:23:37.293800   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:37.294132   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:37.793966   40272 type.go:168] "Request Body" body=""
	I1202 19:23:37.794036   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:37.794297   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:37.794344   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:38.294080   40272 type.go:168] "Request Body" body=""
	I1202 19:23:38.294155   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:38.294482   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:38.794270   40272 type.go:168] "Request Body" body=""
	I1202 19:23:38.794347   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:38.794663   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:39.293411   40272 type.go:168] "Request Body" body=""
	I1202 19:23:39.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:39.293789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:39.793476   40272 type.go:168] "Request Body" body=""
	I1202 19:23:39.793548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:39.793865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:40.293455   40272 type.go:168] "Request Body" body=""
	I1202 19:23:40.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:40.293907   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:40.293963   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:40.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:23:40.793587   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:40.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:41.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:23:41.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:41.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:41.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:23:41.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:41.793844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:42.293444   40272 type.go:168] "Request Body" body=""
	I1202 19:23:42.293546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:42.293898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:42.793891   40272 type.go:168] "Request Body" body=""
	I1202 19:23:42.793960   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:42.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:42.794326   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:43.294061   40272 type.go:168] "Request Body" body=""
	I1202 19:23:43.294133   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:43.294467   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:43.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:23:43.794316   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:43.794572   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:44.294331   40272 type.go:168] "Request Body" body=""
	I1202 19:23:44.294411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:44.294778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:44.793422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:44.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:45.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:45.293631   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:45.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:45.293977   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:45.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:23:45.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:45.793835   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:46.293534   40272 type.go:168] "Request Body" body=""
	I1202 19:23:46.293612   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:46.294003   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:46.793541   40272 type.go:168] "Request Body" body=""
	I1202 19:23:46.793611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:46.793878   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:47.293767   40272 type.go:168] "Request Body" body=""
	I1202 19:23:47.293837   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:47.294173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:47.294229   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:47.794221   40272 type.go:168] "Request Body" body=""
	I1202 19:23:47.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:47.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:48.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:48.293486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:48.293760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:48.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:23:48.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:48.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:49.293446   40272 type.go:168] "Request Body" body=""
	I1202 19:23:49.293546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:49.293944   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:49.793512   40272 type.go:168] "Request Body" body=""
	I1202 19:23:49.793587   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:49.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:49.793918   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:50.293594   40272 type.go:168] "Request Body" body=""
	I1202 19:23:50.293685   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:50.294016   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:50.793739   40272 type.go:168] "Request Body" body=""
	I1202 19:23:50.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:50.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:51.293812   40272 type.go:168] "Request Body" body=""
	I1202 19:23:51.293881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:51.294164   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:51.793945   40272 type.go:168] "Request Body" body=""
	I1202 19:23:51.794024   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:51.794370   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:51.794425   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:52.294111   40272 type.go:168] "Request Body" body=""
	I1202 19:23:52.294180   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:52.294514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:52.794387   40272 type.go:168] "Request Body" body=""
	I1202 19:23:52.794468   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:52.794736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:53.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:23:53.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:53.293866   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:53.793588   40272 type.go:168] "Request Body" body=""
	I1202 19:23:53.793680   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:53.794035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:54.293418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:54.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:54.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:54.293865   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:54.793520   40272 type.go:168] "Request Body" body=""
	I1202 19:23:54.793596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:54.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:55.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:23:55.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:55.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:55.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:23:55.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:55.793859   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:56.293555   40272 type.go:168] "Request Body" body=""
	I1202 19:23:56.293632   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:56.293967   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:56.294027   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:56.793450   40272 type.go:168] "Request Body" body=""
	I1202 19:23:56.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:56.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:57.293744   40272 type.go:168] "Request Body" body=""
	I1202 19:23:57.293822   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:57.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:57.794034   40272 type.go:168] "Request Body" body=""
	I1202 19:23:57.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:57.794429   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:58.294164   40272 type.go:168] "Request Body" body=""
	I1202 19:23:58.294240   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:58.294551   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:58.294605   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:58.794324   40272 type.go:168] "Request Body" body=""
	I1202 19:23:58.794395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:58.794640   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:59.293351   40272 type.go:168] "Request Body" body=""
	I1202 19:23:59.293426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:59.293726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:59.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:23:59.793529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:59.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:00.301671   40272 type.go:168] "Request Body" body=""
	I1202 19:24:00.301760   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:00.302092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:00.302138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:00.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:24:00.793509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:00.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:01.293581   40272 type.go:168] "Request Body" body=""
	I1202 19:24:01.293683   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:01.294068   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:01.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:24:01.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:01.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:02.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:24:02.293633   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:02.293968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:02.793760   40272 type.go:168] "Request Body" body=""
	I1202 19:24:02.793866   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:02.794174   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:02.794228   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:03.293986   40272 type.go:168] "Request Body" body=""
	I1202 19:24:03.294063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:03.296865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1202 19:24:03.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:24:03.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:03.793994   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:04.293692   40272 type.go:168] "Request Body" body=""
	I1202 19:24:04.293763   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:04.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:04.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:24:04.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:04.793833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:05.293536   40272 type.go:168] "Request Body" body=""
	I1202 19:24:05.293614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:05.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:05.294030   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:05.793675   40272 type.go:168] "Request Body" body=""
	I1202 19:24:05.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:05.794044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:06.293762   40272 type.go:168] "Request Body" body=""
	I1202 19:24:06.293838   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:06.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:06.793966   40272 type.go:168] "Request Body" body=""
	I1202 19:24:06.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:06.794391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:07.294030   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.294116   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.298234   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1202 19:24:07.301805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:07.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.794025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:08.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:24:08.293509   40272 node_ready.go:38] duration metric: took 6m0.000285031s for node "functional-374330" to be "Ready" ...
	I1202 19:24:08.296878   40272 out.go:203] 
	W1202 19:24:08.299748   40272 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:24:08.299768   40272 out.go:285] * 
	W1202 19:24:08.301915   40272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:24:08.304698   40272 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:24:16 functional-374330 crio[6021]: time="2025-12-02T19:24:16.908352277Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a5dfc978-249b-4528-9b21-d3c4ee472325 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:16 functional-374330 crio[6021]: time="2025-12-02T19:24:16.931233039Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=bdb08d1d-8c4b-47a8-b2ed-c9dd43b633f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:16 functional-374330 crio[6021]: time="2025-12-02T19:24:16.931390419Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=bdb08d1d-8c4b-47a8-b2ed-c9dd43b633f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:16 functional-374330 crio[6021]: time="2025-12-02T19:24:16.931446171Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=bdb08d1d-8c4b-47a8-b2ed-c9dd43b633f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:17 functional-374330 crio[6021]: time="2025-12-02T19:24:17.970158853Z" level=info msg="Checking image status: minikube-local-cache-test:functional-374330" id=0b5368ba-8f6d-4e19-906a-14804a93f070 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:17 functional-374330 crio[6021]: time="2025-12-02T19:24:17.993766829Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-374330" id=3d861562-348c-4174-87ed-c4d8441bfac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:17 functional-374330 crio[6021]: time="2025-12-02T19:24:17.99391506Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-374330 not found" id=3d861562-348c-4174-87ed-c4d8441bfac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:17 functional-374330 crio[6021]: time="2025-12-02T19:24:17.993958694Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-374330 found" id=3d861562-348c-4174-87ed-c4d8441bfac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:18 functional-374330 crio[6021]: time="2025-12-02T19:24:18.017877187Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-374330" id=df622593-ed34-431d-8945-501a9d654e45 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:18 functional-374330 crio[6021]: time="2025-12-02T19:24:18.018074967Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-374330 not found" id=df622593-ed34-431d-8945-501a9d654e45 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:18 functional-374330 crio[6021]: time="2025-12-02T19:24:18.018119906Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-374330 found" id=df622593-ed34-431d-8945-501a9d654e45 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:18 functional-374330 crio[6021]: time="2025-12-02T19:24:18.811087426Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=0a6bce0d-0ca4-4958-96ca-78901794ebdd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.124897937Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=109a28b2-e4d1-4e84-af3d-b28b7f9f9551 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.125029421Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=109a28b2-e4d1-4e84-af3d-b28b7f9f9551 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.125068042Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=109a28b2-e4d1-4e84-af3d-b28b7f9f9551 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.739571519Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5736aac7-7cc6-429a-a247-7e3e5426e664 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.739703283Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5736aac7-7cc6-429a-a247-7e3e5426e664 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.73976747Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5736aac7-7cc6-429a-a247-7e3e5426e664 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.763152666Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=cccd7a00-ef27-4177-912a-20c68be12228 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.763304359Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=cccd7a00-ef27-4177-912a-20c68be12228 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.7633406Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=cccd7a00-ef27-4177-912a-20c68be12228 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.787265386Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2cf441fc-1ada-487a-9149-e053ded11254 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.787441949Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2cf441fc-1ada-487a-9149-e053ded11254 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.787498095Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=2cf441fc-1ada-487a-9149-e053ded11254 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:20 functional-374330 crio[6021]: time="2025-12-02T19:24:20.32401712Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3126e1a1-7a3d-4dfc-8d4b-cf9d8bcb12d4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:24:21.800455   10033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:21.801018   10033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:21.802672   10033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:21.803272   10033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:21.804836   10033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:24:21 up  1:06,  0 user,  load average: 0.10, 0.21, 0.32
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:24:19 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:20 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 02 19:24:20 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:20 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:20 functional-374330 kubelet[9906]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:20 functional-374330 kubelet[9906]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:20 functional-374330 kubelet[9906]: E1202 19:24:20.083270    9906 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:20 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:20 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:20 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 02 19:24:20 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:20 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:20 functional-374330 kubelet[9936]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:20 functional-374330 kubelet[9936]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:20 functional-374330 kubelet[9936]: E1202 19:24:20.855470    9936 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:20 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:20 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:21 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 02 19:24:21 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:21 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:21 functional-374330 kubelet[9978]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:21 functional-374330 kubelet[9978]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:21 functional-374330 kubelet[9978]: E1202 19:24:21.603517    9978 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:21 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:21 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (349.497112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-374330 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-374330 get pods: exit status 1 (103.674306ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-374330 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (301.13149ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 logs -n 25: (1.014894779s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-535807 image ls --format yaml --alsologtostderr                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format json --alsologtostderr                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr                                            │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format table --alsologtostderr                                                                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ delete         │ -p functional-535807                                                                                                                              │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ start          │ -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ start          │ -p functional-374330 --alsologtostderr -v=8                                                                                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:18 UTC │                     │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:latest                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add minikube-local-cache-test:functional-374330                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache delete minikube-local-cache-test:functional-374330                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl images                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	│ cache          │ functional-374330 cache reload                                                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ kubectl        │ functional-374330 kubectl -- --context functional-374330 get pods                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:18:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:18:02.458749   40272 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:18:02.458868   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.458880   40272 out.go:374] Setting ErrFile to fd 2...
	I1202 19:18:02.458886   40272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:18:02.459160   40272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:18:02.459549   40272 out.go:368] Setting JSON to false
	I1202 19:18:02.460340   40272 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3621,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:18:02.460405   40272 start.go:143] virtualization:  
	I1202 19:18:02.464020   40272 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:18:02.467892   40272 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:18:02.467969   40272 notify.go:221] Checking for updates...
	I1202 19:18:02.474021   40272 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:18:02.477064   40272 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:02.480130   40272 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:18:02.483164   40272 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:18:02.486142   40272 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:18:02.489587   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:02.489732   40272 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:18:02.527318   40272 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:18:02.527492   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.584790   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.575369586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.584902   40272 docker.go:319] overlay module found
	I1202 19:18:02.588038   40272 out.go:179] * Using the docker driver based on existing profile
	I1202 19:18:02.590861   40272 start.go:309] selected driver: docker
	I1202 19:18:02.590885   40272 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.591008   40272 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:18:02.591102   40272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:18:02.644457   40272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:18:02.635623623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:18:02.644867   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:02.644933   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:02.644976   40272 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:02.648222   40272 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:18:02.651050   40272 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:18:02.654072   40272 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:18:02.657154   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:02.657223   40272 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:18:02.676274   40272 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:18:02.676298   40272 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:18:02.730421   40272 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:18:02.934277   40272 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:18:02.934463   40272 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:18:02.934535   40272 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934623   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:18:02.934634   40272 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.203µs
	I1202 19:18:02.934648   40272 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:18:02.934660   40272 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934690   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:18:02.934695   40272 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.324µs
	I1202 19:18:02.934701   40272 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934707   40272 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:18:02.934711   40272 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934738   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:18:02.934736   40272 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934743   40272 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.525µs
	I1202 19:18:02.934750   40272 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934759   40272 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934774   40272 start.go:364] duration metric: took 25.468µs to acquireMachinesLock for "functional-374330"
	I1202 19:18:02.934787   40272 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:18:02.934789   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:18:02.934792   40272 fix.go:54] fixHost starting: 
	I1202 19:18:02.934794   40272 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 35.864µs
	I1202 19:18:02.934800   40272 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934809   40272 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934834   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:18:02.934845   40272 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 19:18:02.934851   40272 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:18:02.934859   40272 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934885   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:18:02.934890   40272 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.983µs
	I1202 19:18:02.934895   40272 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:18:02.934913   40272 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934941   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:18:02.934946   40272 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.707µs
	I1202 19:18:02.934951   40272 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:18:02.934960   40272 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:18:02.934985   40272 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:18:02.934990   40272 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.646µs
	I1202 19:18:02.934995   40272 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:18:02.935015   40272 cache.go:87] Successfully saved all images to host disk.
	I1202 19:18:02.935074   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:02.953213   40272 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:18:02.953249   40272 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:18:02.956557   40272 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:18:02.956597   40272 machine.go:94] provisionDockerMachine start ...
	I1202 19:18:02.956677   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:02.973977   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:02.974301   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:02.974316   40272 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:18:03.125393   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.125419   40272 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:18:03.125485   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.143103   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.143432   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.143449   40272 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:18:03.303153   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:18:03.303231   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.322823   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.323149   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.323170   40272 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:18:03.473999   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:18:03.474027   40272 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:18:03.474048   40272 ubuntu.go:190] setting up certificates
	I1202 19:18:03.474072   40272 provision.go:84] configureAuth start
	I1202 19:18:03.474137   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:03.492443   40272 provision.go:143] copyHostCerts
	I1202 19:18:03.492497   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492535   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:18:03.492553   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:18:03.492631   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:18:03.492733   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492755   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:18:03.492763   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:18:03.492791   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:18:03.492852   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492873   40272 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:18:03.492880   40272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:18:03.492905   40272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:18:03.492966   40272 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:18:03.672249   40272 provision.go:177] copyRemoteCerts
	I1202 19:18:03.672315   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:18:03.672360   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.690216   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:03.793601   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:18:03.793730   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:18:03.811690   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:18:03.811788   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:18:03.829853   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:18:03.829937   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:18:03.847063   40272 provision.go:87] duration metric: took 372.963339ms to configureAuth
	I1202 19:18:03.847135   40272 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:18:03.847323   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:03.847434   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:03.865504   40272 main.go:143] libmachine: Using SSH client type: native
	I1202 19:18:03.865829   40272 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:18:03.865845   40272 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:18:04.201120   40272 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:18:04.201145   40272 machine.go:97] duration metric: took 1.244539118s to provisionDockerMachine
	I1202 19:18:04.201156   40272 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:18:04.201184   40272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:18:04.201288   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:18:04.201334   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.219464   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.321684   40272 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:18:04.325089   40272 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 19:18:04.325149   40272 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 19:18:04.325168   40272 command_runner.go:130] > VERSION_ID="12"
	I1202 19:18:04.325186   40272 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 19:18:04.325207   40272 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 19:18:04.325237   40272 command_runner.go:130] > ID=debian
	I1202 19:18:04.325255   40272 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 19:18:04.325286   40272 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 19:18:04.325319   40272 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 19:18:04.325987   40272 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:18:04.326040   40272 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:18:04.326062   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:18:04.326146   40272 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:18:04.326256   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:18:04.326282   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:18:04.326394   40272 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:18:04.326431   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> /etc/test/nested/copy/4470/hosts
	I1202 19:18:04.326515   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:18:04.334852   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:04.354617   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:18:04.371951   40272 start.go:296] duration metric: took 170.764596ms for postStartSetup
	I1202 19:18:04.372028   40272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:18:04.372100   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.388603   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.485826   40272 command_runner.go:130] > 12%
	I1202 19:18:04.486229   40272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:18:04.490474   40272 command_runner.go:130] > 172G
	I1202 19:18:04.490820   40272 fix.go:56] duration metric: took 1.556023913s for fixHost
	I1202 19:18:04.490841   40272 start.go:83] releasing machines lock for "functional-374330", held for 1.55605912s
	I1202 19:18:04.490913   40272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:18:04.507171   40272 ssh_runner.go:195] Run: cat /version.json
	I1202 19:18:04.507212   40272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:18:04.507223   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.507284   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:04.524406   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.524835   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:04.718816   40272 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 19:18:04.718877   40272 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 19:18:04.719015   40272 ssh_runner.go:195] Run: systemctl --version
	I1202 19:18:04.724818   40272 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 19:18:04.724852   40272 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 19:18:04.725306   40272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:18:04.761633   40272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 19:18:04.765941   40272 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 19:18:04.765984   40272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:18:04.766036   40272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:18:04.775671   40272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:18:04.775697   40272 start.go:496] detecting cgroup driver to use...
	I1202 19:18:04.775733   40272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:18:04.775798   40272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:18:04.790690   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:18:04.805178   40272 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:18:04.805246   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:18:04.821173   40272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:18:04.835737   40272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:18:04.950984   40272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:18:05.087151   40272 docker.go:234] disabling docker service ...
	I1202 19:18:05.087235   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:18:05.103857   40272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:18:05.118486   40272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:18:05.244193   40272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:18:05.357860   40272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:18:05.370494   40272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:18:05.383221   40272 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 19:18:05.384408   40272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:18:05.384504   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.393298   40272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:18:05.393384   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.402265   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.411107   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.420227   40272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:18:05.428585   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.437313   40272 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.445677   40272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.454485   40272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:18:05.461070   40272 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 19:18:05.462061   40272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:18:05.469806   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:05.580364   40272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:18:05.753810   40272 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:18:05.753880   40272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:18:05.759122   40272 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 19:18:05.759148   40272 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 19:18:05.759155   40272 command_runner.go:130] > Device: 0,72	Inode: 1749        Links: 1
	I1202 19:18:05.759163   40272 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:05.759168   40272 command_runner.go:130] > Access: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759176   40272 command_runner.go:130] > Modify: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759183   40272 command_runner.go:130] > Change: 2025-12-02 19:18:05.694155641 +0000
	I1202 19:18:05.759187   40272 command_runner.go:130] >  Birth: -
	I1202 19:18:05.759949   40272 start.go:564] Will wait 60s for crictl version
	I1202 19:18:05.760004   40272 ssh_runner.go:195] Run: which crictl
	I1202 19:18:05.764137   40272 command_runner.go:130] > /usr/local/bin/crictl
	I1202 19:18:05.765127   40272 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:18:05.790594   40272 command_runner.go:130] > Version:  0.1.0
	I1202 19:18:05.790618   40272 command_runner.go:130] > RuntimeName:  cri-o
	I1202 19:18:05.790833   40272 command_runner.go:130] > RuntimeVersion:  1.34.2
	I1202 19:18:05.791045   40272 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 19:18:05.793417   40272 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:18:05.793500   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.827591   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.827617   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.827624   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.827633   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.827640   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.827654   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.827661   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.827671   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.827679   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.827682   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.827686   40272 command_runner.go:130] >      static
	I1202 19:18:05.827702   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.827705   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.827713   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.827719   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.827727   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.827733   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.827740   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.827750   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.827762   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.829485   40272 ssh_runner.go:195] Run: crio --version
	I1202 19:18:05.856217   40272 command_runner.go:130] > crio version 1.34.2
	I1202 19:18:05.856241   40272 command_runner.go:130] >    GitCommit:      84b02b815eded0cd5550f2edf61505eea9bbf074
	I1202 19:18:05.856248   40272 command_runner.go:130] >    GitCommitDate:  2025-11-11T11:43:13Z
	I1202 19:18:05.856254   40272 command_runner.go:130] >    GitTreeState:   dirty
	I1202 19:18:05.856260   40272 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1202 19:18:05.856264   40272 command_runner.go:130] >    GoVersion:      go1.24.6
	I1202 19:18:05.856268   40272 command_runner.go:130] >    Compiler:       gc
	I1202 19:18:05.856272   40272 command_runner.go:130] >    Platform:       linux/arm64
	I1202 19:18:05.856277   40272 command_runner.go:130] >    Linkmode:       static
	I1202 19:18:05.856281   40272 command_runner.go:130] >    BuildTags:
	I1202 19:18:05.856285   40272 command_runner.go:130] >      static
	I1202 19:18:05.856288   40272 command_runner.go:130] >      netgo
	I1202 19:18:05.856292   40272 command_runner.go:130] >      osusergo
	I1202 19:18:05.856297   40272 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1202 19:18:05.856300   40272 command_runner.go:130] >      seccomp
	I1202 19:18:05.856307   40272 command_runner.go:130] >      apparmor
	I1202 19:18:05.856311   40272 command_runner.go:130] >      selinux
	I1202 19:18:05.856315   40272 command_runner.go:130] >    LDFlags:          unknown
	I1202 19:18:05.856333   40272 command_runner.go:130] >    SeccompEnabled:   true
	I1202 19:18:05.856342   40272 command_runner.go:130] >    AppArmorEnabled:  false
	I1202 19:18:05.862922   40272 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:18:05.865574   40272 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:18:05.881617   40272 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:18:05.885365   40272 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 19:18:05.885465   40272 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:18:05.885585   40272 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:18:05.885631   40272 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:18:05.915386   40272 command_runner.go:130] > {
	I1202 19:18:05.915407   40272 command_runner.go:130] >   "images":  [
	I1202 19:18:05.915412   40272 command_runner.go:130] >     {
	I1202 19:18:05.915425   40272 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 19:18:05.915430   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915436   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 19:18:05.915440   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915443   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915458   40272 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1202 19:18:05.915465   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915469   40272 command_runner.go:130] >       "size":  "29035622",
	I1202 19:18:05.915474   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915478   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915484   40272 command_runner.go:130] >     },
	I1202 19:18:05.915487   40272 command_runner.go:130] >     {
	I1202 19:18:05.915494   40272 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 19:18:05.915501   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915507   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 19:18:05.915511   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915523   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915531   40272 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1202 19:18:05.915535   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915542   40272 command_runner.go:130] >       "size":  "74488375",
	I1202 19:18:05.915547   40272 command_runner.go:130] >       "username":  "nonroot",
	I1202 19:18:05.915550   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915553   40272 command_runner.go:130] >     },
	I1202 19:18:05.915562   40272 command_runner.go:130] >     {
	I1202 19:18:05.915572   40272 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 19:18:05.915585   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915590   40272 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 19:18:05.915593   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915597   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915618   40272 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"
	I1202 19:18:05.915626   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915630   40272 command_runner.go:130] >       "size":  "60854229",
	I1202 19:18:05.915634   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915637   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915641   40272 command_runner.go:130] >       },
	I1202 19:18:05.915645   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915652   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915661   40272 command_runner.go:130] >     },
	I1202 19:18:05.915666   40272 command_runner.go:130] >     {
	I1202 19:18:05.915681   40272 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 19:18:05.915686   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915691   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 19:18:05.915697   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915702   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915710   40272 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"
	I1202 19:18:05.915713   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915718   40272 command_runner.go:130] >       "size":  "84947242",
	I1202 19:18:05.915721   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915725   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915728   40272 command_runner.go:130] >       },
	I1202 19:18:05.915736   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915743   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915746   40272 command_runner.go:130] >     },
	I1202 19:18:05.915750   40272 command_runner.go:130] >     {
	I1202 19:18:05.915756   40272 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 19:18:05.915762   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915771   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 19:18:05.915778   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915782   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915790   40272 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"
	I1202 19:18:05.915797   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915805   40272 command_runner.go:130] >       "size":  "72167568",
	I1202 19:18:05.915809   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915813   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915816   40272 command_runner.go:130] >       },
	I1202 19:18:05.915820   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915824   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915828   40272 command_runner.go:130] >     },
	I1202 19:18:05.915831   40272 command_runner.go:130] >     {
	I1202 19:18:05.915841   40272 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 19:18:05.915852   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915858   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 19:18:05.915861   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915866   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915880   40272 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"
	I1202 19:18:05.915883   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915887   40272 command_runner.go:130] >       "size":  "74105124",
	I1202 19:18:05.915891   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915896   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915902   40272 command_runner.go:130] >     },
	I1202 19:18:05.915906   40272 command_runner.go:130] >     {
	I1202 19:18:05.915912   40272 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 19:18:05.915917   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.915925   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 19:18:05.915930   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915934   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.915943   40272 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"
	I1202 19:18:05.915949   40272 command_runner.go:130] >       ],
	I1202 19:18:05.915953   40272 command_runner.go:130] >       "size":  "49819792",
	I1202 19:18:05.915961   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.915968   40272 command_runner.go:130] >         "value":  "0"
	I1202 19:18:05.915972   40272 command_runner.go:130] >       },
	I1202 19:18:05.915976   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.915982   40272 command_runner.go:130] >       "pinned":  false
	I1202 19:18:05.915988   40272 command_runner.go:130] >     },
	I1202 19:18:05.915992   40272 command_runner.go:130] >     {
	I1202 19:18:05.915999   40272 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 19:18:05.916003   40272 command_runner.go:130] >       "repoTags":  [
	I1202 19:18:05.916010   40272 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.916014   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916018   40272 command_runner.go:130] >       "repoDigests":  [
	I1202 19:18:05.916027   40272 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1202 19:18:05.916043   40272 command_runner.go:130] >       ],
	I1202 19:18:05.916046   40272 command_runner.go:130] >       "size":  "517328",
	I1202 19:18:05.916049   40272 command_runner.go:130] >       "uid":  {
	I1202 19:18:05.916054   40272 command_runner.go:130] >         "value":  "65535"
	I1202 19:18:05.916064   40272 command_runner.go:130] >       },
	I1202 19:18:05.916068   40272 command_runner.go:130] >       "username":  "",
	I1202 19:18:05.916072   40272 command_runner.go:130] >       "pinned":  true
	I1202 19:18:05.916075   40272 command_runner.go:130] >     }
	I1202 19:18:05.916078   40272 command_runner.go:130] >   ]
	I1202 19:18:05.916081   40272 command_runner.go:130] > }
	I1202 19:18:05.916221   40272 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:18:05.916234   40272 cache_images.go:86] Images are preloaded, skipping loading
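	(Editor's note) The `sudo crictl images --output json` listing above is what feeds the "all images are preloaded" check. A minimal sketch of decoding that JSON shape and testing for an expected tag, under the assumption that a preload check only needs the id/repoTags fields shown above (this is not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList mirrors the shape of `crictl images --output json` as shown above.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        // A trimmed payload in the same shape as the listing above.
        payload := []byte(`{"images": [{"id": "d7b100cd9a77", "repoTags": ["registry.k8s.io/pause:3.10.1"]}]}`)

        var list imageList
        if err := json.Unmarshal(payload, &list); err != nil {
            panic(err)
        }

        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // One tag taken from the listing above; a real preload check would test every required tag.
        fmt.Println("pause preloaded:", have["registry.k8s.io/pause:3.10.1"])
    }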
	I1202 19:18:05.916241   40272 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:18:05.916331   40272 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
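	(Editor's note) The systemd drop-in above overrides ExecStart with the versioned kubelet binary plus per-node flags (hostname override, node IP, kubeconfig paths). A small illustrative sketch of composing an ExecStart line of that shape from the node and cluster values; the function and field names are assumptions for the example, not minikube's kubeadm package:

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeletExecStart assembles an ExecStart line of the shape shown above.
    func kubeletExecStart(version, nodeName, nodeIP string) string {
        bin := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", version)
        flags := []string{
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--config=/var/lib/kubelet/config.yaml",
            "--hostname-override=" + nodeName,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }
        return bin + " " + strings.Join(flags, " ")
    }

    func main() {
        fmt.Println(kubeletExecStart("v1.35.0-beta.0", "functional-374330", "192.168.49.2"))
    }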
	I1202 19:18:05.916421   40272 ssh_runner.go:195] Run: crio config
	I1202 19:18:05.964092   40272 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 19:18:05.964119   40272 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 19:18:05.964127   40272 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 19:18:05.964130   40272 command_runner.go:130] > #
	I1202 19:18:05.964138   40272 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 19:18:05.964149   40272 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 19:18:05.964156   40272 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 19:18:05.964166   40272 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 19:18:05.964176   40272 command_runner.go:130] > # reload'.
	I1202 19:18:05.964182   40272 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 19:18:05.964189   40272 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 19:18:05.964197   40272 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 19:18:05.964204   40272 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 19:18:05.964210   40272 command_runner.go:130] > [crio]
	I1202 19:18:05.964216   40272 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 19:18:05.964223   40272 command_runner.go:130] > # containers images, in this directory.
	I1202 19:18:05.964661   40272 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1202 19:18:05.964681   40272 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 19:18:05.965195   40272 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1202 19:18:05.965213   40272 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 19:18:05.965585   40272 command_runner.go:130] > # imagestore = ""
	I1202 19:18:05.965601   40272 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 19:18:05.965614   40272 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 19:18:05.966162   40272 command_runner.go:130] > # storage_driver = "overlay"
	I1202 19:18:05.966179   40272 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 19:18:05.966186   40272 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 19:18:05.966362   40272 command_runner.go:130] > # storage_option = [
	I1202 19:18:05.966573   40272 command_runner.go:130] > # ]
	I1202 19:18:05.966591   40272 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 19:18:05.966598   40272 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 19:18:05.966880   40272 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 19:18:05.966894   40272 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 19:18:05.966902   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 19:18:05.966914   40272 command_runner.go:130] > # always happen on a node reboot
	I1202 19:18:05.967066   40272 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 19:18:05.967095   40272 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 19:18:05.967102   40272 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 19:18:05.967107   40272 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 19:18:05.967213   40272 command_runner.go:130] > # version_file_persist = ""
	I1202 19:18:05.967225   40272 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 19:18:05.967234   40272 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 19:18:05.967423   40272 command_runner.go:130] > # internal_wipe = true
	I1202 19:18:05.967436   40272 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 19:18:05.967449   40272 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 19:18:05.967580   40272 command_runner.go:130] > # internal_repair = true
	I1202 19:18:05.967590   40272 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 19:18:05.967596   40272 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 19:18:05.967602   40272 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 19:18:05.967753   40272 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 19:18:05.967764   40272 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 19:18:05.967767   40272 command_runner.go:130] > [crio.api]
	I1202 19:18:05.967773   40272 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 19:18:05.967953   40272 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 19:18:05.967969   40272 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 19:18:05.968134   40272 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 19:18:05.968145   40272 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 19:18:05.968169   40272 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 19:18:05.968297   40272 command_runner.go:130] > # stream_port = "0"
	I1202 19:18:05.968307   40272 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 19:18:05.968473   40272 command_runner.go:130] > # stream_enable_tls = false
	I1202 19:18:05.968483   40272 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 19:18:05.968653   40272 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 19:18:05.968663   40272 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 19:18:05.968669   40272 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968775   40272 command_runner.go:130] > # stream_tls_cert = ""
	I1202 19:18:05.968785   40272 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 19:18:05.968792   40272 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1202 19:18:05.968905   40272 command_runner.go:130] > # stream_tls_key = ""
	I1202 19:18:05.968915   40272 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 19:18:05.968922   40272 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 19:18:05.968926   40272 command_runner.go:130] > # automatically pick up the changes.
	I1202 19:18:05.969055   40272 command_runner.go:130] > # stream_tls_ca = ""
	I1202 19:18:05.969084   40272 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969257   40272 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1202 19:18:05.969270   40272 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 19:18:05.969439   40272 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1202 19:18:05.969511   40272 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 19:18:05.969528   40272 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 19:18:05.969532   40272 command_runner.go:130] > [crio.runtime]
	I1202 19:18:05.969539   40272 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 19:18:05.969544   40272 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 19:18:05.969548   40272 command_runner.go:130] > # "nofile=1024:2048"
	I1202 19:18:05.969554   40272 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 19:18:05.969676   40272 command_runner.go:130] > # default_ulimits = [
	I1202 19:18:05.969684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.969691   40272 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 19:18:05.969900   40272 command_runner.go:130] > # no_pivot = false
	I1202 19:18:05.969912   40272 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 19:18:05.969920   40272 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 19:18:05.970109   40272 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 19:18:05.970119   40272 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 19:18:05.970124   40272 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 19:18:05.970131   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970227   40272 command_runner.go:130] > # conmon = ""
	I1202 19:18:05.970236   40272 command_runner.go:130] > # Cgroup setting for conmon
	I1202 19:18:05.970244   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 19:18:05.970379   40272 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 19:18:05.970389   40272 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 19:18:05.970395   40272 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 19:18:05.970403   40272 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 19:18:05.970521   40272 command_runner.go:130] > # conmon_env = [
	I1202 19:18:05.970671   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970681   40272 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 19:18:05.970687   40272 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 19:18:05.970693   40272 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 19:18:05.970697   40272 command_runner.go:130] > # default_env = [
	I1202 19:18:05.970827   40272 command_runner.go:130] > # ]
	I1202 19:18:05.970837   40272 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 19:18:05.970846   40272 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1202 19:18:05.970995   40272 command_runner.go:130] > # selinux = false
	I1202 19:18:05.971005   40272 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 19:18:05.971014   40272 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1202 19:18:05.971019   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971123   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.971133   40272 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1202 19:18:05.971140   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971283   40272 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1202 19:18:05.971297   40272 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 19:18:05.971349   40272 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 19:18:05.971394   40272 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 19:18:05.971420   40272 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 19:18:05.971426   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.971532   40272 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 19:18:05.971542   40272 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 19:18:05.971554   40272 command_runner.go:130] > # the cgroup blockio controller.
	I1202 19:18:05.971691   40272 command_runner.go:130] > # blockio_config_file = ""
	I1202 19:18:05.971702   40272 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 19:18:05.971706   40272 command_runner.go:130] > # blockio parameters.
	I1202 19:18:05.971888   40272 command_runner.go:130] > # blockio_reload = false
	I1202 19:18:05.971899   40272 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 19:18:05.971911   40272 command_runner.go:130] > # irqbalance daemon.
	I1202 19:18:05.972089   40272 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 19:18:05.972099   40272 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 19:18:05.972107   40272 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 19:18:05.972118   40272 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 19:18:05.972238   40272 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 19:18:05.972249   40272 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 19:18:05.972255   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.972373   40272 command_runner.go:130] > # rdt_config_file = ""
	I1202 19:18:05.972382   40272 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 19:18:05.972510   40272 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 19:18:05.972521   40272 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 19:18:05.972668   40272 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 19:18:05.972679   40272 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 19:18:05.972686   40272 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 19:18:05.972689   40272 command_runner.go:130] > # will be added.
	I1202 19:18:05.972804   40272 command_runner.go:130] > # default_capabilities = [
	I1202 19:18:05.972909   40272 command_runner.go:130] > # 	"CHOWN",
	I1202 19:18:05.973035   40272 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 19:18:05.973186   40272 command_runner.go:130] > # 	"FSETID",
	I1202 19:18:05.973194   40272 command_runner.go:130] > # 	"FOWNER",
	I1202 19:18:05.973322   40272 command_runner.go:130] > # 	"SETGID",
	I1202 19:18:05.973468   40272 command_runner.go:130] > # 	"SETUID",
	I1202 19:18:05.973500   40272 command_runner.go:130] > # 	"SETPCAP",
	I1202 19:18:05.973632   40272 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 19:18:05.973847   40272 command_runner.go:130] > # 	"KILL",
	I1202 19:18:05.973855   40272 command_runner.go:130] > # ]
	I1202 19:18:05.973864   40272 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 19:18:05.973870   40272 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 19:18:05.974039   40272 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 19:18:05.974052   40272 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 19:18:05.974059   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974062   40272 command_runner.go:130] > default_sysctls = [
	I1202 19:18:05.974148   40272 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 19:18:05.974179   40272 command_runner.go:130] > ]
	I1202 19:18:05.974185   40272 command_runner.go:130] > # List of devices on the host that a
	I1202 19:18:05.974297   40272 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 19:18:05.974459   40272 command_runner.go:130] > # allowed_devices = [
	I1202 19:18:05.974492   40272 command_runner.go:130] > # 	"/dev/fuse",
	I1202 19:18:05.974497   40272 command_runner.go:130] > # 	"/dev/net/tun",
	I1202 19:18:05.974500   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974505   40272 command_runner.go:130] > # List of additional devices. specified as
	I1202 19:18:05.974517   40272 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 19:18:05.974706   40272 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 19:18:05.974717   40272 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 19:18:05.974722   40272 command_runner.go:130] > # additional_devices = [
	I1202 19:18:05.974730   40272 command_runner.go:130] > # ]
	I1202 19:18:05.974735   40272 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 19:18:05.974870   40272 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 19:18:05.975061   40272 command_runner.go:130] > # 	"/etc/cdi",
	I1202 19:18:05.975069   40272 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 19:18:05.975204   40272 command_runner.go:130] > # ]
	I1202 19:18:05.975337   40272 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 19:18:05.975610   40272 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 19:18:05.975708   40272 command_runner.go:130] > # Defaults to false.
	I1202 19:18:05.975730   40272 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 19:18:05.975766   40272 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 19:18:05.975927   40272 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 19:18:05.976135   40272 command_runner.go:130] > # hooks_dir = [
	I1202 19:18:05.976173   40272 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 19:18:05.976199   40272 command_runner.go:130] > # ]
	I1202 19:18:05.976222   40272 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 19:18:05.976257   40272 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 19:18:05.976344   40272 command_runner.go:130] > # its default mounts from the following two files:
	I1202 19:18:05.976363   40272 command_runner.go:130] > #
	I1202 19:18:05.976438   40272 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 19:18:05.976465   40272 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 19:18:05.976485   40272 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 19:18:05.976561   40272 command_runner.go:130] > #
	I1202 19:18:05.976637   40272 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 19:18:05.976658   40272 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 19:18:05.976681   40272 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 19:18:05.976711   40272 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 19:18:05.976797   40272 command_runner.go:130] > #
	I1202 19:18:05.976852   40272 command_runner.go:130] > # default_mounts_file = ""
	I1202 19:18:05.976886   40272 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 19:18:05.976912   40272 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 19:18:05.976930   40272 command_runner.go:130] > # pids_limit = -1
	I1202 19:18:05.977014   40272 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1202 19:18:05.977040   40272 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 19:18:05.977112   40272 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 19:18:05.977136   40272 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 19:18:05.977153   40272 command_runner.go:130] > # log_size_max = -1
	I1202 19:18:05.977240   40272 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 19:18:05.977264   40272 command_runner.go:130] > # log_to_journald = false
	I1202 19:18:05.977344   40272 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 19:18:05.977370   40272 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 19:18:05.977390   40272 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 19:18:05.977478   40272 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 19:18:05.977500   40272 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 19:18:05.977570   40272 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 19:18:05.977596   40272 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 19:18:05.977614   40272 command_runner.go:130] > # read_only = false
	I1202 19:18:05.977722   40272 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 19:18:05.977797   40272 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 19:18:05.977817   40272 command_runner.go:130] > # live configuration reload.
	I1202 19:18:05.977836   40272 command_runner.go:130] > # log_level = "info"
	I1202 19:18:05.977872   40272 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 19:18:05.977956   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.978011   40272 command_runner.go:130] > # log_filter = ""
	I1202 19:18:05.978051   40272 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978073   40272 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 19:18:05.978093   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978128   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978214   40272 command_runner.go:130] > # uid_mappings = ""
	I1202 19:18:05.978236   40272 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 19:18:05.978257   40272 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 19:18:05.978338   40272 command_runner.go:130] > # separated by comma.
	I1202 19:18:05.978377   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978397   40272 command_runner.go:130] > # gid_mappings = ""
	I1202 19:18:05.978483   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 19:18:05.978556   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978583   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978606   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978700   40272 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 19:18:05.978728   40272 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 19:18:05.978805   40272 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 19:18:05.978827   40272 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 19:18:05.978909   40272 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 19:18:05.978941   40272 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 19:18:05.979022   40272 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 19:18:05.979049   40272 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 19:18:05.979139   40272 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 19:18:05.979164   40272 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 19:18:05.979239   40272 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 19:18:05.979264   40272 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 19:18:05.979291   40272 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 19:18:05.979376   40272 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 19:18:05.979411   40272 command_runner.go:130] > # drop_infra_ctr = true
	I1202 19:18:05.979493   40272 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 19:18:05.979517   40272 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 19:18:05.979541   40272 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 19:18:05.979625   40272 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 19:18:05.979649   40272 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 19:18:05.979723   40272 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 19:18:05.979744   40272 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 19:18:05.979763   40272 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 19:18:05.979845   40272 command_runner.go:130] > # shared_cpuset = ""
	I1202 19:18:05.979867   40272 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 19:18:05.979937   40272 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 19:18:05.979961   40272 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 19:18:05.979983   40272 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 19:18:05.980069   40272 command_runner.go:130] > # pinns_path = ""
	I1202 19:18:05.980091   40272 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 19:18:05.980113   40272 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 19:18:05.980205   40272 command_runner.go:130] > # enable_criu_support = true
	I1202 19:18:05.980225   40272 command_runner.go:130] > # Enable/disable the generation of the container,
	I1202 19:18:05.980246   40272 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 19:18:05.980337   40272 command_runner.go:130] > # enable_pod_events = false
	I1202 19:18:05.980364   40272 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 19:18:05.980435   40272 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 19:18:05.980456   40272 command_runner.go:130] > # default_runtime = "crun"
	I1202 19:18:05.980476   40272 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 19:18:05.980567   40272 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1202 19:18:05.980641   40272 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 19:18:05.980666   40272 command_runner.go:130] > # creation as a file is not desired either.
	I1202 19:18:05.980689   40272 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 19:18:05.980782   40272 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 19:18:05.980807   40272 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 19:18:05.980885   40272 command_runner.go:130] > # ]
	I1202 19:18:05.980907   40272 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 19:18:05.980989   40272 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 19:18:05.981060   40272 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 19:18:05.981080   40272 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 19:18:05.981155   40272 command_runner.go:130] > #
	I1202 19:18:05.981180   40272 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 19:18:05.981237   40272 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 19:18:05.981273   40272 command_runner.go:130] > # runtime_type = "oci"
	I1202 19:18:05.981291   40272 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 19:18:05.981311   40272 command_runner.go:130] > # inherit_default_runtime = false
	I1202 19:18:05.981423   40272 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 19:18:05.981442   40272 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 19:18:05.981461   40272 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 19:18:05.981479   40272 command_runner.go:130] > # monitor_env = []
	I1202 19:18:05.981507   40272 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 19:18:05.981530   40272 command_runner.go:130] > # allowed_annotations = []
	I1202 19:18:05.981553   40272 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 19:18:05.981571   40272 command_runner.go:130] > # no_sync_log = false
	I1202 19:18:05.981591   40272 command_runner.go:130] > # default_annotations = {}
	I1202 19:18:05.981620   40272 command_runner.go:130] > # stream_websockets = false
	I1202 19:18:05.981644   40272 command_runner.go:130] > # seccomp_profile = ""
	I1202 19:18:05.981733   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.981765   40272 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 19:18:05.981785   40272 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 19:18:05.981807   40272 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 19:18:05.981914   40272 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 19:18:05.981934   40272 command_runner.go:130] > #   in $PATH.
	I1202 19:18:05.981954   40272 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 19:18:05.981989   40272 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 19:18:05.982017   40272 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 19:18:05.982034   40272 command_runner.go:130] > #   state.
	I1202 19:18:05.982057   40272 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 19:18:05.982098   40272 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1202 19:18:05.982128   40272 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1202 19:18:05.982148   40272 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1202 19:18:05.982168   40272 command_runner.go:130] > #   the values from the default runtime on load time.
	I1202 19:18:05.982199   40272 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 19:18:05.982235   40272 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 19:18:05.982255   40272 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 19:18:05.982277   40272 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 19:18:05.982307   40272 command_runner.go:130] > #   The currently recognized values are:
	I1202 19:18:05.982329   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 19:18:05.983678   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 19:18:05.983703   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 19:18:05.983795   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 19:18:05.983829   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 19:18:05.983905   40272 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 19:18:05.983938   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 19:18:05.983958   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 19:18:05.983978   40272 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 19:18:05.984011   40272 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1202 19:18:05.984040   40272 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1202 19:18:05.984061   40272 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1202 19:18:05.984082   40272 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1202 19:18:05.984114   40272 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1202 19:18:05.984143   40272 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1202 19:18:05.984168   40272 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1202 19:18:05.984191   40272 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 19:18:05.984220   40272 command_runner.go:130] > #   deprecated option "conmon".
	I1202 19:18:05.984244   40272 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 19:18:05.984265   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 19:18:05.984298   40272 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 19:18:05.984320   40272 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 19:18:05.984343   40272 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 19:18:05.984373   40272 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 19:18:05.984413   40272 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1202 19:18:05.984432   40272 command_runner.go:130] > #   conmon-rs by using:
	I1202 19:18:05.984470   40272 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1202 19:18:05.984495   40272 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1202 19:18:05.984515   40272 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1202 19:18:05.984549   40272 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 19:18:05.984571   40272 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 19:18:05.984595   40272 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1202 19:18:05.984630   40272 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1202 19:18:05.984653   40272 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1202 19:18:05.984677   40272 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1202 19:18:05.984716   40272 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1202 19:18:05.984737   40272 command_runner.go:130] > #   when a machine crash happens.
	I1202 19:18:05.984765   40272 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1202 19:18:05.984801   40272 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1202 19:18:05.984825   40272 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1202 19:18:05.984846   40272 command_runner.go:130] > #   seccomp profile for the runtime.
	I1202 19:18:05.984877   40272 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1202 19:18:05.984902   40272 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1202 19:18:05.984921   40272 command_runner.go:130] > #
	I1202 19:18:05.984958   40272 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 19:18:05.984976   40272 command_runner.go:130] > #
	I1202 19:18:05.984996   40272 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 19:18:05.985026   40272 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 19:18:05.985052   40272 command_runner.go:130] > #
	I1202 19:18:05.985075   40272 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 19:18:05.985099   40272 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 19:18:05.985125   40272 command_runner.go:130] > #
	I1202 19:18:05.985149   40272 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 19:18:05.985169   40272 command_runner.go:130] > # feature.
	I1202 19:18:05.985199   40272 command_runner.go:130] > #
	I1202 19:18:05.985224   40272 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1202 19:18:05.985244   40272 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 19:18:05.985274   40272 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 19:18:05.985304   40272 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 19:18:05.985329   40272 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 19:18:05.985349   40272 command_runner.go:130] > #
	I1202 19:18:05.985381   40272 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 19:18:05.985404   40272 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 19:18:05.985422   40272 command_runner.go:130] > #
	I1202 19:18:05.985454   40272 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1202 19:18:05.985482   40272 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 19:18:05.985497   40272 command_runner.go:130] > #
	I1202 19:18:05.985518   40272 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 19:18:05.985550   40272 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 19:18:05.985582   40272 command_runner.go:130] > # limitation.
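A minimal sketch of the notifier wiring described above, assuming the crun runtime defined just below is the one carrying the annotation; only the annotation key, the "stop" value, and restartPolicy "Never" come from the comments above, the rest is illustrative:
	# crio.conf drop-in: permit the notifier annotation for the chosen runtime
	[crio.runtime.runtimes.crun]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	
	# pod sandbox side: terminate the workload (after the 5s timeout) on a blocked syscall
	metadata:
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never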
	I1202 19:18:05.985602   40272 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1202 19:18:05.985622   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1202 19:18:05.985670   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985689   40272 command_runner.go:130] > runtime_root = "/run/crun"
	I1202 19:18:05.985704   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985709   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985725   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985731   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985741   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985745   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985749   40272 command_runner.go:130] > allowed_annotations = [
	I1202 19:18:05.985754   40272 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1202 19:18:05.985759   40272 command_runner.go:130] > ]
	I1202 19:18:05.985765   40272 command_runner.go:130] > privileged_without_host_devices = false
	I1202 19:18:05.985769   40272 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 19:18:05.985782   40272 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1202 19:18:05.985786   40272 command_runner.go:130] > runtime_type = ""
	I1202 19:18:05.985795   40272 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 19:18:05.985801   40272 command_runner.go:130] > inherit_default_runtime = false
	I1202 19:18:05.985810   40272 command_runner.go:130] > runtime_config_path = ""
	I1202 19:18:05.985821   40272 command_runner.go:130] > container_min_memory = ""
	I1202 19:18:05.985829   40272 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 19:18:05.985833   40272 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 19:18:05.985837   40272 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 19:18:05.985845   40272 command_runner.go:130] > privileged_without_host_devices = false
	I1202 19:18:05.985852   40272 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 19:18:05.985860   40272 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 19:18:05.985867   40272 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 19:18:05.985881   40272 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1202 19:18:05.985892   40272 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1202 19:18:05.985905   40272 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1202 19:18:05.985915   40272 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1202 19:18:05.985926   40272 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 19:18:05.985936   40272 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 19:18:05.985947   40272 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 19:18:05.985953   40272 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 19:18:05.985964   40272 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 19:18:05.985968   40272 command_runner.go:130] > # Example:
	I1202 19:18:05.985975   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 19:18:05.985980   40272 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 19:18:05.985987   40272 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 19:18:05.985993   40272 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 19:18:05.985996   40272 command_runner.go:130] > # cpuset = "0-1"
	I1202 19:18:05.986000   40272 command_runner.go:130] > # cpushares = "5"
	I1202 19:18:05.986007   40272 command_runner.go:130] > # cpuquota = "1000"
	I1202 19:18:05.986011   40272 command_runner.go:130] > # cpuperiod = "100000"
	I1202 19:18:05.986014   40272 command_runner.go:130] > # cpulimit = "35"
	I1202 19:18:05.986018   40272 command_runner.go:130] > # Where:
	I1202 19:18:05.986025   40272 command_runner.go:130] > # The workload name is workload-type.
	I1202 19:18:05.986033   40272 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 19:18:05.986041   40272 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 19:18:05.986047   40272 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 19:18:05.986057   40272 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 19:18:05.986069   40272 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
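Put together with the example above, a pod opting into the "workload-type" workload and overriding cpushares for a hypothetical container named "app" would carry annotations of the $annotation_prefix.$resource/$ctrName shape described earlier; the values here are illustrative:
	metadata:
	  annotations:
	    io.crio/workload: ""                        # activation annotation, key only
	    io.crio.workload-type.cpushares/app: "200"  # per-container override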
	I1202 19:18:05.986075   40272 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 19:18:05.986082   40272 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 19:18:05.986086   40272 command_runner.go:130] > # Default value is set to true
	I1202 19:18:05.986096   40272 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 19:18:05.986102   40272 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 19:18:05.986107   40272 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 19:18:05.986117   40272 command_runner.go:130] > # Default value is set to 'false'
	I1202 19:18:05.986121   40272 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 19:18:05.986127   40272 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1202 19:18:05.986137   40272 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1202 19:18:05.986142   40272 command_runner.go:130] > # timezone = ""
	I1202 19:18:05.986151   40272 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 19:18:05.986154   40272 command_runner.go:130] > #
	I1202 19:18:05.986160   40272 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system-wide
	I1202 19:18:05.986171   40272 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1202 19:18:05.986178   40272 command_runner.go:130] > [crio.image]
	I1202 19:18:05.986184   40272 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 19:18:05.986189   40272 command_runner.go:130] > # default_transport = "docker://"
	I1202 19:18:05.986197   40272 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 19:18:05.986205   40272 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986212   40272 command_runner.go:130] > # global_auth_file = ""
	I1202 19:18:05.986217   40272 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 19:18:05.986223   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986230   40272 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1202 19:18:05.986237   40272 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 19:18:05.986243   40272 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 19:18:05.986248   40272 command_runner.go:130] > # This option supports live configuration reload.
	I1202 19:18:05.986255   40272 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 19:18:05.986260   40272 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 19:18:05.986266   40272 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1202 19:18:05.986275   40272 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1202 19:18:05.986281   40272 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 19:18:05.986291   40272 command_runner.go:130] > # pause_command = "/pause"
	I1202 19:18:05.986301   40272 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 19:18:05.986309   40272 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 19:18:05.986319   40272 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 19:18:05.986324   40272 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 19:18:05.986331   40272 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 19:18:05.986337   40272 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 19:18:05.986343   40272 command_runner.go:130] > # pinned_images = [
	I1202 19:18:05.986346   40272 command_runner.go:130] > # ]
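An illustrative pinned_images entry exercising the three pattern styles described above (the image names are examples, not taken from this run):
	# pinned_images = [
	# 	"registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
	# 	"registry.k8s.io/kube-*",        # glob: wildcard only at the end
	# 	"*coredns*",                     # keyword: wildcards on both ends
	# ]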
	I1202 19:18:05.986352   40272 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 19:18:05.986360   40272 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 19:18:05.986367   40272 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 19:18:05.986376   40272 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 19:18:05.986381   40272 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 19:18:05.986388   40272 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1202 19:18:05.986394   40272 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 19:18:05.986401   40272 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 19:18:05.986415   40272 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 19:18:05.986422   40272 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1202 19:18:05.986431   40272 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1202 19:18:05.986436   40272 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
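For example, with the directory above, an image pulled for a pod in the kube-system namespace would first be evaluated against /etc/crio/policies/kube-system.json; if that file does not exist, the signature_policy set earlier (or the system-wide policy) applies.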
	I1202 19:18:05.986442   40272 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 19:18:05.986452   40272 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 19:18:05.986456   40272 command_runner.go:130] > # changing them here.
	I1202 19:18:05.986462   40272 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1202 19:18:05.986468   40272 command_runner.go:130] > # insecure_registries = [
	I1202 19:18:05.986472   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986478   40272 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 19:18:05.986486   40272 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 19:18:05.986490   40272 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 19:18:05.986495   40272 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 19:18:05.986499   40272 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 19:18:05.986505   40272 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1202 19:18:05.986518   40272 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1202 19:18:05.986525   40272 command_runner.go:130] > # auto_reload_registries = false
	I1202 19:18:05.986531   40272 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1202 19:18:05.986543   40272 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1202 19:18:05.986549   40272 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1202 19:18:05.986556   40272 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1202 19:18:05.986561   40272 command_runner.go:130] > # The mode of short name resolution.
	I1202 19:18:05.986568   40272 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1202 19:18:05.986578   40272 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1202 19:18:05.986583   40272 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1202 19:18:05.986588   40272 command_runner.go:130] > # short_name_mode = "enforcing"
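To illustrate what "ambiguous" means here: with more than one unqualified-search registry configured in containers-registries.conf(5), a bare short name such as "nginx" cannot be resolved uniquely, so "enforcing" rejects the pull unless an alias pins it. A hedged sketch of such a registries.conf fragment (registry names and the alias are illustrative):
	unqualified-search-registries = ["docker.io", "quay.io"]
	
	[aliases]
	  "nginx" = "docker.io/library/nginx"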
	I1202 19:18:05.986593   40272 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1202 19:18:05.986602   40272 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1202 19:18:05.986606   40272 command_runner.go:130] > # oci_artifact_mount_support = true
	I1202 19:18:05.986612   40272 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 19:18:05.986619   40272 command_runner.go:130] > # CNI plugins.
	I1202 19:18:05.986623   40272 command_runner.go:130] > [crio.network]
	I1202 19:18:05.986629   40272 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 19:18:05.986637   40272 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1202 19:18:05.986640   40272 command_runner.go:130] > # cni_default_network = ""
	I1202 19:18:05.986646   40272 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 19:18:05.986655   40272 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 19:18:05.986661   40272 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 19:18:05.986664   40272 command_runner.go:130] > # plugin_dirs = [
	I1202 19:18:05.986668   40272 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 19:18:05.986674   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986678   40272 command_runner.go:130] > # List of included pod metrics.
	I1202 19:18:05.986681   40272 command_runner.go:130] > # included_pod_metrics = [
	I1202 19:18:05.986684   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986690   40272 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1202 19:18:05.986696   40272 command_runner.go:130] > [crio.metrics]
	I1202 19:18:05.986701   40272 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 19:18:05.986705   40272 command_runner.go:130] > # enable_metrics = false
	I1202 19:18:05.986718   40272 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 19:18:05.986723   40272 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 19:18:05.986732   40272 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 19:18:05.986738   40272 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 19:18:05.986744   40272 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 19:18:05.986748   40272 command_runner.go:130] > # metrics_collectors = [
	I1202 19:18:05.986753   40272 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 19:18:05.986760   40272 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 19:18:05.986764   40272 command_runner.go:130] > # 	"containers_oom_total",
	I1202 19:18:05.986768   40272 command_runner.go:130] > # 	"processes_defunct",
	I1202 19:18:05.986777   40272 command_runner.go:130] > # 	"operations_total",
	I1202 19:18:05.986782   40272 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 19:18:05.986787   40272 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 19:18:05.986793   40272 command_runner.go:130] > # 	"operations_errors_total",
	I1202 19:18:05.986797   40272 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 19:18:05.986802   40272 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 19:18:05.986809   40272 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 19:18:05.986814   40272 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 19:18:05.986819   40272 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 19:18:05.986823   40272 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 19:18:05.986829   40272 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 19:18:05.986836   40272 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 19:18:05.986840   40272 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1202 19:18:05.986844   40272 command_runner.go:130] > # ]
	I1202 19:18:05.986852   40272 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1202 19:18:05.986862   40272 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1202 19:18:05.986870   40272 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 19:18:05.986877   40272 command_runner.go:130] > # metrics_port = 9090
	I1202 19:18:05.986882   40272 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 19:18:05.986886   40272 command_runner.go:130] > # metrics_socket = ""
	I1202 19:18:05.986893   40272 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 19:18:05.986899   40272 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 19:18:05.986906   40272 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 19:18:05.986918   40272 command_runner.go:130] > # certificate on any modification event.
	I1202 19:18:05.986933   40272 command_runner.go:130] > # metrics_cert = ""
	I1202 19:18:05.986939   40272 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 19:18:05.986947   40272 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 19:18:05.986950   40272 command_runner.go:130] > # metrics_key = ""
	I1202 19:18:05.986956   40272 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 19:18:05.986962   40272 command_runner.go:130] > [crio.tracing]
	I1202 19:18:05.986967   40272 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 19:18:05.986972   40272 command_runner.go:130] > # enable_tracing = false
	I1202 19:18:05.986979   40272 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1202 19:18:05.986984   40272 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1202 19:18:05.986990   40272 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 19:18:05.986997   40272 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 19:18:05.987001   40272 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 19:18:05.987007   40272 command_runner.go:130] > [crio.nri]
	I1202 19:18:05.987011   40272 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 19:18:05.987015   40272 command_runner.go:130] > # enable_nri = true
	I1202 19:18:05.987019   40272 command_runner.go:130] > # NRI socket to listen on.
	I1202 19:18:05.987029   40272 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 19:18:05.987033   40272 command_runner.go:130] > # NRI plugin directory to use.
	I1202 19:18:05.987037   40272 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 19:18:05.987045   40272 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 19:18:05.987050   40272 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 19:18:05.987056   40272 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 19:18:05.987116   40272 command_runner.go:130] > # nri_disable_connections = false
	I1202 19:18:05.987126   40272 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 19:18:05.987130   40272 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 19:18:05.987136   40272 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 19:18:05.987142   40272 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 19:18:05.987147   40272 command_runner.go:130] > # NRI default validator configuration.
	I1202 19:18:05.987157   40272 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1202 19:18:05.987166   40272 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1202 19:18:05.987170   40272 command_runner.go:130] > # can be restricted/rejected:
	I1202 19:18:05.987178   40272 command_runner.go:130] > # - OCI hook injection
	I1202 19:18:05.987186   40272 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1202 19:18:05.987191   40272 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1202 19:18:05.987196   40272 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1202 19:18:05.987203   40272 command_runner.go:130] > # - adjustment of linux namespaces
	I1202 19:18:05.987209   40272 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1202 19:18:05.987216   40272 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1202 19:18:05.987225   40272 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1202 19:18:05.987230   40272 command_runner.go:130] > #
	I1202 19:18:05.987234   40272 command_runner.go:130] > # [crio.nri.default_validator]
	I1202 19:18:05.987239   40272 command_runner.go:130] > # nri_enable_default_validator = false
	I1202 19:18:05.987245   40272 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1202 19:18:05.987254   40272 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1202 19:18:05.987260   40272 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1202 19:18:05.987268   40272 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1202 19:18:05.987279   40272 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1202 19:18:05.987283   40272 command_runner.go:130] > # nri_validator_required_plugins = [
	I1202 19:18:05.987286   40272 command_runner.go:130] > # ]
	I1202 19:18:05.987291   40272 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1202 19:18:05.987299   40272 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 19:18:05.987302   40272 command_runner.go:130] > [crio.stats]
	I1202 19:18:05.987308   40272 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 19:18:05.987316   40272 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 19:18:05.987320   40272 command_runner.go:130] > # stats_collection_period = 0
	I1202 19:18:05.987326   40272 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1202 19:18:05.987334   40272 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1202 19:18:05.987344   40272 command_runner.go:130] > # collection_period = 0
	I1202 19:18:05.987392   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941536561Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1202 19:18:05.987405   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941573139Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1202 19:18:05.987421   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941598771Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1202 19:18:05.987431   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.941629007Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1202 19:18:05.987447   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.94184771Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:18:05.987460   40272 command_runner.go:130] ! time="2025-12-02T19:18:05.942236436Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1202 19:18:05.987477   40272 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 19:18:05.987606   40272 cni.go:84] Creating CNI manager for ""
	I1202 19:18:05.987620   40272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:18:05.987644   40272 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:18:05.987670   40272 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:18:05.987799   40272 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
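On a fresh node, a config of this shape would typically be consumed through kubeadm's --config flag (illustrative command, not executed in this run); here it is only written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the existing /var/tmp/minikube/kubeadm.yaml during the control-plane restart:
	kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new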
	
	I1202 19:18:05.987877   40272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:18:05.995250   40272 command_runner.go:130] > kubeadm
	I1202 19:18:05.995271   40272 command_runner.go:130] > kubectl
	I1202 19:18:05.995276   40272 command_runner.go:130] > kubelet
	I1202 19:18:05.995308   40272 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:18:05.995379   40272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:18:06.002605   40272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:18:06.015240   40272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:18:06.033933   40272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1202 19:18:06.047469   40272 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:18:06.051453   40272 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 19:18:06.051580   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:06.161840   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:06.543709   40272 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:18:06.543774   40272 certs.go:195] generating shared ca certs ...
	I1202 19:18:06.543803   40272 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:06.543968   40272 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:18:06.544037   40272 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:18:06.544058   40272 certs.go:257] generating profile certs ...
	I1202 19:18:06.544203   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:18:06.544311   40272 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:18:06.544381   40272 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:18:06.544424   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:18:06.544458   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:18:06.544493   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:18:06.544537   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:18:06.544570   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:18:06.544599   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:18:06.544648   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:18:06.544683   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:18:06.544773   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:18:06.544828   40272 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:18:06.544854   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:18:06.544932   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:18:06.551062   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:18:06.551141   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:18:06.551220   40272 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:18:06.551261   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.551291   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.551312   40272 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.552213   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:18:06.569384   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:18:06.587883   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:18:06.609527   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:18:06.628039   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:18:06.644623   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:18:06.662478   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:18:06.679440   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:18:06.696330   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:18:06.713584   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:18:06.731033   40272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:18:06.747714   40272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:18:06.761265   40272 ssh_runner.go:195] Run: openssl version
	I1202 19:18:06.766652   40272 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 19:18:06.767017   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:18:06.774639   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.777834   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778051   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.778107   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:18:06.818127   40272 command_runner.go:130] > b5213941
	I1202 19:18:06.818625   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:18:06.826391   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:18:06.834719   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838324   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838367   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.838418   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:18:06.878978   40272 command_runner.go:130] > 51391683
	I1202 19:18:06.879420   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:18:06.887230   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:18:06.895470   40272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899261   40272 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899287   40272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.899335   40272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:18:06.940199   40272 command_runner.go:130] > 3ec20f2e
	I1202 19:18:06.940694   40272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
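The three certificate installs above follow one pattern: link the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and expose it as /etc/ssl/certs/<hash>.0. Condensed into a shell sketch (paths and the resulting b5213941 hash are from this run; the variable is illustrative):
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"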
	I1202 19:18:06.948359   40272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951793   40272 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:18:06.951816   40272 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 19:18:06.951822   40272 command_runner.go:130] > Device: 259,1	Inode: 1315539     Links: 1
	I1202 19:18:06.951851   40272 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 19:18:06.951865   40272 command_runner.go:130] > Access: 2025-12-02 19:13:58.595474405 +0000
	I1202 19:18:06.951871   40272 command_runner.go:130] > Modify: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951876   40272 command_runner.go:130] > Change: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951881   40272 command_runner.go:130] >  Birth: 2025-12-02 19:09:54.356903009 +0000
	I1202 19:18:06.951960   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:18:06.996850   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:06.997318   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:18:07.037433   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.037885   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:18:07.078161   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.078666   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:18:07.119364   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.119441   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:18:07.159628   40272 command_runner.go:130] > Certificate will not expire
	I1202 19:18:07.160136   40272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:18:07.204176   40272 command_runner.go:130] > Certificate will not expire
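Each check above relies on openssl's -checkend flag: the command prints "Certificate will not expire" and exits 0 when the certificate is still valid 86400 seconds (24 hours) from now, and exits non-zero otherwise, which appears to be what gates certificate regeneration here. A small shell sketch (the message in the fallback branch is illustrative):
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  || echo "certificate expires within 24h; would be regenerated"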
	I1202 19:18:07.204662   40272 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:18:07.204768   40272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:18:07.204851   40272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:18:07.233427   40272 cri.go:89] found id: ""
	I1202 19:18:07.233514   40272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:18:07.240330   40272 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 19:18:07.240352   40272 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 19:18:07.240359   40272 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 19:18:07.241346   40272 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:18:07.241363   40272 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:18:07.241437   40272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:18:07.248549   40272 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:18:07.248941   40272 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-374330" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249040   40272 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "functional-374330" cluster setting kubeconfig missing "functional-374330" context setting]
	I1202 19:18:07.249312   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.249749   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.249896   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.250443   40272 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:18:07.250467   40272 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:18:07.250474   40272 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:18:07.250478   40272 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:18:07.250487   40272 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:18:07.250526   40272 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:18:07.250793   40272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:18:07.258519   40272 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:18:07.258557   40272 kubeadm.go:602] duration metric: took 17.188352ms to restartPrimaryControlPlane
	I1202 19:18:07.258569   40272 kubeadm.go:403] duration metric: took 53.913832ms to StartCluster
	I1202 19:18:07.258583   40272 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.258647   40272 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.259281   40272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:18:07.259482   40272 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:18:07.259876   40272 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:18:07.259927   40272 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:18:07.259993   40272 addons.go:70] Setting storage-provisioner=true in profile "functional-374330"
	I1202 19:18:07.260007   40272 addons.go:239] Setting addon storage-provisioner=true in "functional-374330"
	I1202 19:18:07.260034   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.260061   40272 addons.go:70] Setting default-storageclass=true in profile "functional-374330"
	I1202 19:18:07.260107   40272 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-374330"
	I1202 19:18:07.260433   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.260513   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.266365   40272 out.go:179] * Verifying Kubernetes components...
	I1202 19:18:07.269343   40272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:18:07.293348   40272 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:18:07.293507   40272 kapi.go:59] client config for functional-374330: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:18:07.293796   40272 addons.go:239] Setting addon default-storageclass=true in "functional-374330"
	I1202 19:18:07.293827   40272 host.go:66] Checking if "functional-374330" exists ...
	I1202 19:18:07.294253   40272 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:18:07.304761   40272 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:18:07.307700   40272 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.307724   40272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 19:18:07.307789   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.332842   40272 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:07.332860   40272 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 19:18:07.332914   40272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:18:07.347890   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.373144   40272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:18:07.469482   40272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:18:07.472955   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:07.515784   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.293178   40272 node_ready.go:35] waiting up to 6m0s for node "functional-374330" to be "Ready" ...
	I1202 19:18:08.293301   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.293355   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.293568   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293595   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293615   40272 retry.go:31] will retry after 144.187129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293684   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.293702   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293710   40272 retry.go:31] will retry after 132.365923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.293768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:08.427169   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:08.438559   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.510555   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513791   40272 retry.go:31] will retry after 461.570102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513742   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.513825   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.513833   40272 retry.go:31] will retry after 354.67857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.794133   40272 type.go:168] "Request Body" body=""
	I1202 19:18:08.794203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:08.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:08.868974   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:08.929070   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:08.932369   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.932402   40272 retry.go:31] will retry after 765.19043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:08.975575   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.036469   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.042296   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.042376   40272 retry.go:31] will retry after 433.124039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.293618   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.293713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:09.476440   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:09.538441   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.541412   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.541444   40272 retry.go:31] will retry after 747.346338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.698768   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:09.764666   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:09.764703   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.764723   40272 retry.go:31] will retry after 541.76994ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:09.793827   40272 type.go:168] "Request Body" body=""
	I1202 19:18:09.793965   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:09.794261   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:10.289986   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:10.293340   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.293732   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:10.293780   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:10.307063   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:10.373573   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.373608   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.373627   40272 retry.go:31] will retry after 1.037281057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388739   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:10.388813   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.388864   40272 retry.go:31] will retry after 1.072570226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:10.794280   40272 type.go:168] "Request Body" body=""
	I1202 19:18:10.794348   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:10.794651   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.293375   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.293466   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.293739   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:11.411088   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:11.462503   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:11.470558   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.470603   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.470624   40272 retry.go:31] will retry after 2.459470693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530455   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:11.530510   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.530529   40272 retry.go:31] will retry after 2.35440359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:11.794013   40272 type.go:168] "Request Body" body=""
	I1202 19:18:11.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:11.794477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:12.294194   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.294271   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:12.294648   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:12.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:12.793567   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:12.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.793595   40272 type.go:168] "Request Body" body=""
	I1202 19:18:13.793686   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:13.794006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:13.885433   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:13.930854   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:13.940303   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:13.943330   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:13.943359   40272 retry.go:31] will retry after 2.562469282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000907   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:14.000951   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.000969   40272 retry.go:31] will retry after 3.172954134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:14.294316   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.294381   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:14.793366   40272 type.go:168] "Request Body" body=""
	I1202 19:18:14.793435   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:14.793778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:14.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:15.293495   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:15.793590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:15.793675   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:15.794004   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.293435   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.293890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:16.506093   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:16.576298   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:16.580372   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.580403   40272 retry.go:31] will retry after 6.193423377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:16.793925   40272 type.go:168] "Request Body" body=""
	I1202 19:18:16.794050   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:16.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:16.794410   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:17.174990   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:17.234065   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:17.234161   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.234184   40272 retry.go:31] will retry after 6.017051757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:17.293565   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.293640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:17.793940   40272 type.go:168] "Request Body" body=""
	I1202 19:18:17.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:17.794318   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.294120   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.294191   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.294497   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:18.794258   40272 type.go:168] "Request Body" body=""
	I1202 19:18:18.794341   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:18.794641   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:18.794693   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:19.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:19.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:18:19.793693   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:19.794032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.293712   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:20.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:20.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:20.793838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:21.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:21.293929   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:21.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:18:21.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:21.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.293417   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.774666   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:22.793983   40272 type.go:168] "Request Body" body=""
	I1202 19:18:22.794053   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:22.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:22.835259   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:22.835293   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:22.835313   40272 retry.go:31] will retry after 8.891499319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.251502   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:23.293920   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.293995   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.294305   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:23.294361   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:23.316803   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:23.325390   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.325420   40272 retry.go:31] will retry after 5.436174555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:23.794140   40272 type.go:168] "Request Body" body=""
	I1202 19:18:23.794209   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:23.794514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.294165   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.294234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.294532   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:24.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:18:24.794307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:24.794552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:25.294405   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.294476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.294786   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:25.294838   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:25.793518   40272 type.go:168] "Request Body" body=""
	I1202 19:18:25.793593   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:25.793954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.293881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:26.793441   40272 type.go:168] "Request Body" body=""
	I1202 19:18:26.793515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:26.793898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.293636   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.294038   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:27.793924   40272 type.go:168] "Request Body" body=""
	I1202 19:18:27.793994   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:27.794242   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:27.794290   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:28.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.294085   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.294398   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.762126   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:28.793717   40272 type.go:168] "Request Body" body=""
	I1202 19:18:28.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:28.794058   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:28.820417   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:28.820461   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:28.820480   40272 retry.go:31] will retry after 5.23527752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:29.294048   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.294387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:29.794183   40272 type.go:168] "Request Body" body=""
	I1202 19:18:29.794303   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:29.794634   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:29.794706   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:30.294267   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.294340   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.294624   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:30.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:30.793398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:30.793762   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.293466   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.293841   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:31.727474   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:31.785329   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:31.788538   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.788571   40272 retry.go:31] will retry after 14.027342391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:31.793764   40272 type.go:168] "Request Body" body=""
	I1202 19:18:31.793834   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:31.794170   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:32.293926   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.293991   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.294245   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:32.294283   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:32.794305   40272 type.go:168] "Request Body" body=""
	I1202 19:18:32.794380   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:32.794731   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.293682   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.294006   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:33.793493   40272 type.go:168] "Request Body" body=""
	I1202 19:18:33.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:33.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:34.056328   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:34.114988   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:34.115034   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.115053   40272 retry.go:31] will retry after 20.825216377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:34.294372   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.294768   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:34.294823   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:34.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:18:34.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:34.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.293815   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.293900   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.294151   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:35.793855   40272 type.go:168] "Request Body" body=""
	I1202 19:18:35.793935   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:35.794205   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.293483   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:36.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:18:36.793564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:36.793873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:36.793925   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:37.293668   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.293762   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.294075   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:37.793947   40272 type.go:168] "Request Body" body=""
	I1202 19:18:37.794015   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:37.794293   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.294087   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.294335   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:38.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:38.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:38.794481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:38.794533   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:39.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.294563   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:39.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:39.794411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:39.794661   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:40.793560   40272 type.go:168] "Request Body" body=""
	I1202 19:18:40.793636   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:40.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:41.293642   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:41.294091   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:41.793737   40272 type.go:168] "Request Body" body=""
	I1202 19:18:41.793809   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:41.794119   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:42.294249   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.294351   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.295481   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1202 19:18:42.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:18:42.794309   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:42.794549   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:43.294307   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.294779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:43.294833   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:43.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:18:43.793526   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:43.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.293539   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.293609   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.293876   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:44.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:18:44.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.293775   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.294288   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:45.794074   40272 type.go:168] "Request Body" body=""
	I1202 19:18:45.794139   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:45.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:45.794427   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:45.816754   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:45.885215   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:45.888326   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:45.888364   40272 retry.go:31] will retry after 11.821193731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:46.293908   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.293987   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.294332   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:46.794097   40272 type.go:168] "Request Body" body=""
	I1202 19:18:46.794188   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:46.794450   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.294325   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.294656   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:47.793465   40272 type.go:168] "Request Body" body=""
	I1202 19:18:47.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:47.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:48.293461   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.293549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:48.293980   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:48.793521   40272 type.go:168] "Request Body" body=""
	I1202 19:18:48.793585   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:48.793925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.293671   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.293755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.294085   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:49.793786   40272 type.go:168] "Request Body" body=""
	I1202 19:18:49.793857   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:49.794203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:50.293936   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.294005   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.294362   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:50.794095   40272 type.go:168] "Request Body" body=""
	I1202 19:18:50.794170   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:50.794494   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.294326   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.294720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:51.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:18:51.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:51.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:52.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:18:52.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:52.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:52.793945   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:53.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.293667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.293927   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:53.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:18:53.793852   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:53.794188   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.294005   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.294075   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.294426   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:54.794205   40272 type.go:168] "Request Body" body=""
	I1202 19:18:54.794284   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:54.794553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:54.794600   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:54.941002   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:18:55.004086   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:55.004129   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.004148   40272 retry.go:31] will retry after 20.918145005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:55.293488   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.293564   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.293885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:55.793617   40272 type.go:168] "Request Body" body=""
	I1202 19:18:55.793707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:55.794018   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.293767   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:56.793648   40272 type.go:168] "Request Body" body=""
	I1202 19:18:56.793755   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:56.794090   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:57.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.293891   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.294211   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:57.294263   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:18:57.710107   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:18:57.765891   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:18:57.765928   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.765947   40272 retry.go:31] will retry after 13.115816401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:18:57.793988   40272 type.go:168] "Request Body" body=""
	I1202 19:18:57.794063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:57.794301   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.294217   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:58.793333   40272 type.go:168] "Request Body" body=""
	I1202 19:18:58.793430   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:58.793738   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.293442   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.293550   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:18:59.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:18:59.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:18:59.793871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:18:59.793930   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:00.295673   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.295757   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.296162   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:00.793971   40272 type.go:168] "Request Body" body=""
	I1202 19:19:00.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:00.794393   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.294295   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.294639   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:01.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:19:01.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:01.793817   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:02.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:02.293931   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:02.793522   40272 type.go:168] "Request Body" body=""
	I1202 19:19:02.793600   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:02.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.293690   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.293758   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.294007   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:03.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:19:03.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:03.793884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:04.293572   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:04.294031   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:04.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:19:04.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:04.793792   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:05.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:19:05.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:05.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:05.793473   40272 type.go:168] "Request Body" body=""
	I1202 19:19:05.793568   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:05.793916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:06.293590   40272 type.go:168] "Request Body" body=""
	I1202 19:19:06.293673   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:06.293971   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:06.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:19:06.793528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:06.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:06.793897   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:07.293734   40272 type.go:168] "Request Body" body=""
	I1202 19:19:07.293806   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:07.294152   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:07.793956   40272 type.go:168] "Request Body" body=""
	I1202 19:19:07.794035   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:07.794289   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:08.294051   40272 type.go:168] "Request Body" body=""
	I1202 19:19:08.294130   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:08.294477   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:08.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:19:08.794232   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:08.794588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:08.794644   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:09.294344   40272 type.go:168] "Request Body" body=""
	I1202 19:19:09.294413   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:09.294705   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:09.793394   40272 type.go:168] "Request Body" body=""
	I1202 19:19:09.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:09.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:19:10.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:10.293916   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:19:10.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:10.793812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:10.882157   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:10.938212   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:10.938272   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:10.938296   40272 retry.go:31] will retry after 16.990081142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:11.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:19:11.293533   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:11.293866   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:11.293912   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:11.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:19:11.793554   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:11.793893   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:12.293418   40272 type.go:168] "Request Body" body=""
	I1202 19:19:12.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:12.293805   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:12.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:12.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:12.793829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:13.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:19:13.293523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:13.293887   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:13.293939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:13.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:19:13.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:13.793901   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:14.293451   40272 type.go:168] "Request Body" body=""
	I1202 19:19:14.293545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:14.293847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:14.793538   40272 type.go:168] "Request Body" body=""
	I1202 19:19:14.793612   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:14.793947   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:15.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:19:15.293500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:15.293781   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:15.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:19:15.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:15.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:15.793881   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:15.923138   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:15.976380   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:15.979446   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 19:19:15.979475   40272 retry.go:31] will retry after 43.938975662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
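The apply failure recorded above is a schema-validation failure rather than a manifest problem: kubectl cannot download the OpenAPI schema because nothing is answering on port 8441 yet, so minikube queues a retry with backoff (the retry.go line). Below is a minimal stand-alone sketch of that retry pattern. It is illustrative only, not minikube's retry.go: the kubeconfig and manifest paths are copied from the log, the backoff values are invented for the example, and it assumes a kubectl binary is on PATH.

    // Illustrative sketch (not minikube source): retry a kubectl apply with
    // exponential backoff while the apiserver is still coming up.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	backoff := 2 * time.Second
    	for attempt := 1; attempt <= 6; attempt++ {
    		// --validate=false would skip the OpenAPI download that fails in the
    		// log above, but the apply itself still needs a reachable apiserver.
    		cmd := exec.Command("kubectl",
    			"--kubeconfig", "/var/lib/minikube/kubeconfig",
    			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
    		out, err := cmd.CombinedOutput()
    		if err == nil {
    			fmt.Printf("applied on attempt %d\n", attempt)
    			return
    		}
    		fmt.Printf("attempt %d failed: %v\n%s\n", attempt, err, out)
    		time.Sleep(backoff)
    		backoff *= 2 // exponential backoff, loosely mirroring the "will retry after" log line
    	}
    	fmt.Println("giving up after repeated failures")
    }
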
	I1202 19:19:16.293891   40272 type.go:168] "Request Body" body=""
	I1202 19:19:16.293966   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:16.294319   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:16.793918   40272 type.go:168] "Request Body" body=""
	I1202 19:19:16.794007   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:16.794273   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:17.293817   40272 type.go:168] "Request Body" body=""
	I1202 19:19:17.293889   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:17.294222   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:17.794224   40272 type.go:168] "Request Body" body=""
	I1202 19:19:17.794322   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:17.794659   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:17.794718   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:18.293644   40272 type.go:168] "Request Body" body=""
	I1202 19:19:18.293745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:18.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:18.793819   40272 type.go:168] "Request Body" body=""
	I1202 19:19:18.793896   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:18.794214   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:19.294047   40272 type.go:168] "Request Body" body=""
	I1202 19:19:19.294119   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:19.294429   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:19.794155   40272 type.go:168] "Request Body" body=""
	I1202 19:19:19.794251   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:19.794516   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:20.294336   40272 type.go:168] "Request Body" body=""
	I1202 19:19:20.294409   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:20.294750   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:20.294804   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:20.793466   40272 type.go:168] "Request Body" body=""
	I1202 19:19:20.793549   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:20.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:21.293392   40272 type.go:168] "Request Body" body=""
	I1202 19:19:21.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:21.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:21.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:19:21.793509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:21.793880   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:22.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:19:22.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:22.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:22.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:19:22.793814   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:22.794072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:22.794110   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:23.293462   40272 type.go:168] "Request Body" body=""
	I1202 19:19:23.293552   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:23.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:23.793520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:23.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:24.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:19:24.293676   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:24.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:24.793402   40272 type.go:168] "Request Body" body=""
	I1202 19:19:24.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:24.793777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:25.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:19:25.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:25.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:25.293933   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:25.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:19:25.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:25.793822   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:26.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:26.293528   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:26.293870   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:26.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:19:26.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:26.794001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:27.293786   40272 type.go:168] "Request Body" body=""
	I1202 19:19:27.293876   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:27.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:27.294188   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:27.794144   40272 type.go:168] "Request Body" body=""
	I1202 19:19:27.794229   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:27.794572   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:27.928884   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:19:27.980862   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983877   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:27.983967   40272 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:28.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:19:28.293635   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:28.293939   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:28.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:19:28.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:28.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:29.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:19:29.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:29.293888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:29.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:19:29.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:29.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:29.793943   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:30.293604   40272 type.go:168] "Request Body" body=""
	I1202 19:19:30.293690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:30.293949   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:30.793447   40272 type.go:168] "Request Body" body=""
	I1202 19:19:30.793541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:30.793879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:31.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:19:31.293681   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:31.294045   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:31.793596   40272 type.go:168] "Request Body" body=""
	I1202 19:19:31.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:31.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:31.793973   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:32.293633   40272 type.go:168] "Request Body" body=""
	I1202 19:19:32.293736   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:32.294100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:32.794048   40272 type.go:168] "Request Body" body=""
	I1202 19:19:32.794127   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:32.794454   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:33.294107   40272 type.go:168] "Request Body" body=""
	I1202 19:19:33.294193   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:33.294469   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:33.794161   40272 type.go:168] "Request Body" body=""
	I1202 19:19:33.794241   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:33.794576   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:33.794630   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:34.294318   40272 type.go:168] "Request Body" body=""
	I1202 19:19:34.294390   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:34.294756   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:34.793348   40272 type.go:168] "Request Body" body=""
	I1202 19:19:34.793419   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:34.793816   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:35.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:19:35.293582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:35.293934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:35.793450   40272 type.go:168] "Request Body" body=""
	I1202 19:19:35.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:35.793853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:36.293403   40272 type.go:168] "Request Body" body=""
	I1202 19:19:36.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:36.293796   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:36.293849   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:36.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:19:36.793604   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:36.793910   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:37.293819   40272 type.go:168] "Request Body" body=""
	I1202 19:19:37.293921   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:37.294237   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:37.793992   40272 type.go:168] "Request Body" body=""
	I1202 19:19:37.794062   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:37.794317   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:38.294129   40272 type.go:168] "Request Body" body=""
	I1202 19:19:38.294219   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:38.294552   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:38.294607   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:38.794375   40272 type.go:168] "Request Body" body=""
	I1202 19:19:38.794449   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:38.794753   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:39.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:19:39.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:39.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:39.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:19:39.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:39.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:40.293464   40272 type.go:168] "Request Body" body=""
	I1202 19:19:40.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:40.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:40.793609   40272 type.go:168] "Request Body" body=""
	I1202 19:19:40.793726   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:40.793971   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:40.794046   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:41.293697   40272 type.go:168] "Request Body" body=""
	I1202 19:19:41.293783   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:41.294101   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:41.793762   40272 type.go:168] "Request Body" body=""
	I1202 19:19:41.793835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:41.794208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:42.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:19:42.293532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:42.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:42.793895   40272 type.go:168] "Request Body" body=""
	I1202 19:19:42.793974   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:42.794274   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:42.794330   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:43.293462   40272 type.go:168] "Request Body" body=""
	I1202 19:19:43.293536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:43.293847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:43.793403   40272 type.go:168] "Request Body" body=""
	I1202 19:19:43.793470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:43.793794   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:44.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:19:44.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:44.293875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:44.793570   40272 type.go:168] "Request Body" body=""
	I1202 19:19:44.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:44.793981   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:45.293992   40272 type.go:168] "Request Body" body=""
	I1202 19:19:45.294153   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:45.294968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:45.295095   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:45.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:19:45.793517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:45.793874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:46.293433   40272 type.go:168] "Request Body" body=""
	I1202 19:19:46.293523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:46.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:46.793580   40272 type.go:168] "Request Body" body=""
	I1202 19:19:46.793672   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:46.794005   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:47.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:19:47.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:47.294181   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:47.794191   40272 type.go:168] "Request Body" body=""
	I1202 19:19:47.794264   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:47.794574   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:47.794634   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:48.294351   40272 type.go:168] "Request Body" body=""
	I1202 19:19:48.294414   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:48.294658   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:48.793387   40272 type.go:168] "Request Body" body=""
	I1202 19:19:48.793458   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:48.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:49.293548   40272 type.go:168] "Request Body" body=""
	I1202 19:19:49.293622   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:49.293967   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:49.793638   40272 type.go:168] "Request Body" body=""
	I1202 19:19:49.793723   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:49.793982   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:50.293669   40272 type.go:168] "Request Body" body=""
	I1202 19:19:50.293738   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:50.294063   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:50.294115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:50.793649   40272 type.go:168] "Request Body" body=""
	I1202 19:19:50.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:50.794030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:51.293404   40272 type.go:168] "Request Body" body=""
	I1202 19:19:51.293477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:51.293789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:51.793444   40272 type.go:168] "Request Body" body=""
	I1202 19:19:51.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:51.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:52.293605   40272 type.go:168] "Request Body" body=""
	I1202 19:19:52.293689   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:52.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:52.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:19:52.794056   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:52.794307   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:52.794355   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:53.294127   40272 type.go:168] "Request Body" body=""
	I1202 19:19:53.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:53.294542   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:53.794333   40272 type.go:168] "Request Body" body=""
	I1202 19:19:53.794460   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:53.794789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:54.293367   40272 type.go:168] "Request Body" body=""
	I1202 19:19:54.293448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:54.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:54.793399   40272 type.go:168] "Request Body" body=""
	I1202 19:19:54.793485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:54.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:55.293465   40272 type.go:168] "Request Body" body=""
	I1202 19:19:55.293544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:55.293912   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:55.293970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:55.793387   40272 type.go:168] "Request Body" body=""
	I1202 19:19:55.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:55.793748   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:56.293378   40272 type.go:168] "Request Body" body=""
	I1202 19:19:56.293444   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:56.293784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:56.793485   40272 type.go:168] "Request Body" body=""
	I1202 19:19:56.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:56.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:57.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:19:57.293823   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:57.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:57.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:57.794072   40272 type.go:168] "Request Body" body=""
	I1202 19:19:57.794142   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:57.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:58.294111   40272 type.go:168] "Request Body" body=""
	I1202 19:19:58.294203   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:58.294515   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:58.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:19:58.794402   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:58.794662   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:59.293346   40272 type.go:168] "Request Body" body=""
	I1202 19:19:59.293443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:59.293801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:19:59.793412   40272 type.go:168] "Request Body" body=""
	I1202 19:19:59.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:19:59.793844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:19:59.793894   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:19:59.919155   40272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:19:59.978732   40272 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978768   40272 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 19:19:59.978842   40272 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 19:19:59.981270   40272 out.go:179] * Enabled addons: 
	I1202 19:19:59.984008   40272 addons.go:530] duration metric: took 1m52.724080055s for enable addons: enabled=[]
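At this point the addon phase ends with an empty addon list, while the GET polling that continues below is the node-readiness wait recorded by node_ready.go. The following is a minimal client-go sketch of the same kind of check; it is illustrative only (not minikube's implementation), assumes k8s.io/client-go is available as a module dependency, and reuses the kubeconfig path and node name from the log purely as placeholders.

    // Illustrative sketch (not minikube's node_ready.go): poll a node's Ready
    // condition until the apiserver answers and reports Ready=True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-374330", metav1.GetOptions{})
    		if err != nil {
    			// Mirrors the "connection refused" retries in the log: report and poll again.
    			fmt.Println("node not reachable yet:", err)
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("node is Ready")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
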
	I1202 19:20:00.293431   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.319155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=25
	I1202 19:20:00.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:00.793581   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:00.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.293643   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.294269   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:01.794085   40272 type.go:168] "Request Body" body=""
	I1202 19:20:01.794165   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:01.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:01.794475   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:02.294283   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.294801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:02.793839   40272 type.go:168] "Request Body" body=""
	I1202 19:20:02.793918   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:02.794224   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.293780   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.293848   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.294097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:03.793818   40272 type.go:168] "Request Body" body=""
	I1202 19:20:03.793890   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:03.794190   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:04.294069   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.294138   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.294439   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:04.294488   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:04.794180   40272 type.go:168] "Request Body" body=""
	I1202 19:20:04.794261   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:04.794525   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.294270   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.294339   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.294637   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:05.793358   40272 type.go:168] "Request Body" body=""
	I1202 19:20:05.793447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:05.793770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:06.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:06.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:06.794145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:06.794195   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:07.293975   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.294054   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.294413   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:07.794308   40272 type.go:168] "Request Body" body=""
	I1202 19:20:07.794425   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:07.794772   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.293671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.294020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:08.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:08.793557   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:08.793899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:09.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.293474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.293769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:09.293828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:09.794253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:09.794326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:09.794686   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.293416   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:10.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:10.793790   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:11.293475   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.293548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:11.293934   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:11.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:11.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:11.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.293544   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.293610   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.293915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:12.793833   40272 type.go:168] "Request Body" body=""
	I1202 19:20:12.793916   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:12.794241   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:13.293799   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.293872   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.294179   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:13.294238   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:13.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:20:13.794022   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:13.794276   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.294026   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.294105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.294453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:14.794135   40272 type.go:168] "Request Body" body=""
	I1202 19:20:14.794207   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:14.794524   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:15.294253   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.294326   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:15.294638   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:15.793355   40272 type.go:168] "Request Body" body=""
	I1202 19:20:15.793426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:15.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.293453   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.293882   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:16.793551   40272 type.go:168] "Request Body" body=""
	I1202 19:20:16.793621   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:16.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.293774   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.293867   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:17.794117   40272 type.go:168] "Request Body" body=""
	I1202 19:20:17.794213   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:17.794539   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:17.794594   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:18.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.294374   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:18.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:18.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:18.794070   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.293525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:19.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:20:19.793475   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:19.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:20.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.293534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.293900   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:20.293961   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:20.793436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:20.793512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:20.793848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.293924   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:21.793463   40272 type.go:168] "Request Body" body=""
	I1202 19:20:21.793540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:21.793956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.293478   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:22.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:22.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:22.793771   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:22.793827   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:23.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.293868   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:23.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:20:23.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:23.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.293436   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.293506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:24.793430   40272 type.go:168] "Request Body" body=""
	I1202 19:20:24.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:24.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:24.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:25.293608   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.293707   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.294025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:25.793471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:25.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:25.793831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.293513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:26.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:20:26.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:26.794022   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:26.794082   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:27.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.293785   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.294032   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:27.793959   40272 type.go:168] "Request Body" body=""
	I1202 19:20:27.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:27.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.294157   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.294237   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.294582   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:28.794354   40272 type.go:168] "Request Body" body=""
	I1202 19:20:28.794429   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:28.794706   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:28.794758   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:29.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:29.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:20:29.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:29.793883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.293432   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.293782   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:30.793503   40272 type.go:168] "Request Body" body=""
	I1202 19:20:30.793582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:30.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:31.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.293580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.293930   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:31.293985   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:31.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:20:31.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.293429   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.293854   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:32.793797   40272 type.go:168] "Request Body" body=""
	I1202 19:20:32.793874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:32.794194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:33.293954   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.294018   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.294268   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:33.294307   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:33.794022   40272 type.go:168] "Request Body" body=""
	I1202 19:20:33.794093   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:33.794394   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.294075   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.294145   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.294479   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:34.794081   40272 type.go:168] "Request Body" body=""
	I1202 19:20:34.794161   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:34.794411   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:35.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.294307   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.294631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:35.294684   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:35.794291   40272 type.go:168] "Request Body" body=""
	I1202 19:20:35.794361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:35.794710   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.294305   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.294383   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.294672   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:36.793421   40272 type.go:168] "Request Body" body=""
	I1202 19:20:36.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:36.793869   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.293817   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.294175   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:37.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:20:37.794113   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:37.794365   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:37.794404   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:38.294151   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.294244   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.294567   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:38.794364   40272 type.go:168] "Request Body" body=""
	I1202 19:20:38.794441   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:38.794795   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:39.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:20:39.793688   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:39.794051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:40.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.293749   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.294072   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:40.294131   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:40.793755   40272 type.go:168] "Request Body" body=""
	I1202 19:20:40.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:40.794137   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.293804   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.293874   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.294208   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:41.794044   40272 type.go:168] "Request Body" body=""
	I1202 19:20:41.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:41.794437   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:42.294271   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.294354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.294638   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:42.294682   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:42.793464   40272 type.go:168] "Request Body" body=""
	I1202 19:20:42.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:42.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.293529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.293884   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:43.793555   40272 type.go:168] "Request Body" body=""
	I1202 19:20:43.793624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:43.793904   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.293582   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.293677   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:44.793724   40272 type.go:168] "Request Body" body=""
	I1202 19:20:44.793796   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:44.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:44.794158   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:45.293768   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.293839   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.294135   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:45.794039   40272 type.go:168] "Request Body" body=""
	I1202 19:20:45.794110   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:45.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.294279   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.294679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:46.793388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:46.793455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:46.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:47.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.293786   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.294051   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:47.294093   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:47.794031   40272 type.go:168] "Request Body" body=""
	I1202 19:20:47.794101   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:47.794424   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.294153   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.294227   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.294472   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:48.794239   40272 type.go:168] "Request Body" body=""
	I1202 19:20:48.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:48.794680   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.293388   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.293461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.293815   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:49.793404   40272 type.go:168] "Request Body" body=""
	I1202 19:20:49.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:49.793801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:49.793850   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:50.293494   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.293573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.293926   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:50.793499   40272 type.go:168] "Request Body" body=""
	I1202 19:20:50.793579   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:50.793881   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.293925   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:51.793716   40272 type.go:168] "Request Body" body=""
	I1202 19:20:51.793794   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:51.794124   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:51.794181   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:52.293850   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.293930   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.294277   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:52.794083   40272 type.go:168] "Request Body" body=""
	I1202 19:20:52.794149   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:52.794406   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.294121   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.294195   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.294529   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:53.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:20:53.794350   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:53.794679   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:53.794733   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:54.293471   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.293541   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:54.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:20:54.793494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:54.793825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:55.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:20:55.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:55.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:56.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.293455   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.293777   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:56.293831   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:56.793498   40272 type.go:168] "Request Body" body=""
	I1202 19:20:56.793574   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:56.793934   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.293625   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.293700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.293941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:57.793858   40272 type.go:168] "Request Body" body=""
	I1202 19:20:57.793928   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:57.794244   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:58.294012   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.294083   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.294416   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:20:58.294470   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:20:58.794152   40272 type.go:168] "Request Body" body=""
	I1202 19:20:58.794222   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:58.794483   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.294312   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.294645   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:20:59.794292   40272 type.go:168] "Request Body" body=""
	I1202 19:20:59.794364   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:20:59.794674   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.293476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.293799   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:00.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:00.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:00.793832   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:00.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:01.293577   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:01.793727   40272 type.go:168] "Request Body" body=""
	I1202 19:21:01.793804   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:01.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.293823   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.293903   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.294253   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:02.794285   40272 type.go:168] "Request Body" body=""
	I1202 19:21:02.794354   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:02.794650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:02.794701   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:03.293400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.293470   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:03.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:03.793544   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:03.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.293824   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:04.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:04.793464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:04.793783   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:05.293327   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.293398   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.293720   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:05.293767   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:05.794396   40272 type.go:168] "Request Body" body=""
	I1202 19:21:05.794464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:05.794774   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.293412   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.293683   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:06.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:06.793543   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:06.793909   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:07.293810   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.293905   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.294228   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:07.294294   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:07.794228   40272 type.go:168] "Request Body" body=""
	I1202 19:21:07.794296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:07.794557   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.294314   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.294391   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.294721   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:08.793437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:08.793513   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:08.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.293515   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.293883   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:09.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:09.793507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:09.793849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:09.793915   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:10.293585   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.293946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:10.793633   40272 type.go:168] "Request Body" body=""
	I1202 19:21:10.793713   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:10.794014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.293509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.293862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:11.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:11.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:11.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.293767   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:12.293819   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:12.793445   40272 type.go:168] "Request Body" body=""
	I1202 19:21:12.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:12.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.293560   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.293641   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:13.793415   40272 type.go:168] "Request Body" body=""
	I1202 19:21:13.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:13.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:14.293447   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.293853   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:14.293920   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:14.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:21:14.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:14.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.293520   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.293586   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.293861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:15.793540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:15.793613   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:15.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:16.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.293615   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:16.293998   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:16.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:21:16.793482   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:16.793803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.293689   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.293770   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:17.793898   40272 type.go:168] "Request Body" body=""
	I1202 19:21:17.793968   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:17.794294   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:18.294019   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.294082   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.294374   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:18.294428   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:18.794173   40272 type.go:168] "Request Body" body=""
	I1202 19:21:18.794258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:18.794584   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.294375   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.294447   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.294755   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:19.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:19.793492   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:19.793769   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.293838   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:20.793542   40272 type.go:168] "Request Body" body=""
	I1202 19:21:20.793614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:20.793957   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:20.794013   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:21.293675   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.293740   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:21.793459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:21.793536   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:21.793872   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.293517   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.293837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:22.793766   40272 type.go:168] "Request Body" body=""
	I1202 19:21:22.793836   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:22.794155   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:22.794204   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:23.293449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.293530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.293908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:23.793615   40272 type.go:168] "Request Body" body=""
	I1202 19:21:23.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:23.794078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:24.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:21:24.793516   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:24.793860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:25.293571   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.293642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.293963   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:25.294010   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:25.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:21:25.793479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:25.793840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.293510   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.293834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:26.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:26.793506   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:26.793850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:27.293759   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.294093   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:27.294139   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:27.794030   40272 type.go:168] "Request Body" body=""
	I1202 19:21:27.794105   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:27.794432   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.294126   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.294201   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.294546   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:28.794278   40272 type.go:168] "Request Body" body=""
	I1202 19:21:28.794342   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:28.794587   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.293336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.293499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:29.793558   40272 type.go:168] "Request Body" body=""
	I1202 19:21:29.793642   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:29.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:29.794070   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:30.293622   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.293704   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:30.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:21:30.793500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:30.793839   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.293485   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:31.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:21:31.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:31.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:32.293467   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.293899   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:32.293955   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:32.793454   40272 type.go:168] "Request Body" body=""
	I1202 19:21:32.793527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:32.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.293566   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.293634   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:33.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:33.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:33.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.293851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:34.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:21:34.793481   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:34.793759   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:34.793805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:35.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.293507   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.293873   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:35.793599   40272 type.go:168] "Request Body" body=""
	I1202 19:21:35.793691   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:35.794021   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.293711   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.293780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.294089   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:36.793879   40272 type.go:168] "Request Body" body=""
	I1202 19:21:36.793947   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:36.794270   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:36.794327   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:37.294002   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.294382   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:37.794293   40272 type.go:168] "Request Body" body=""
	I1202 19:21:37.794366   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:37.794623   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.293377   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.293463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.293793   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:38.793479   40272 type.go:168] "Request Body" body=""
	I1202 19:21:38.793551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:38.793911   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:39.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.293576   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.293857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:39.293900   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:39.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:21:39.793511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:39.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:40.793400   40272 type.go:168] "Request Body" body=""
	I1202 19:21:40.793469   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:40.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.293410   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.293820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:41.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:21:41.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:41.793779   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:41.793832   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:42.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.293551   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:42.793809   40272 type.go:168] "Request Body" body=""
	I1202 19:21:42.793881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:42.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:43.794230   40272 type.go:168] "Request Body" body=""
	I1202 19:21:43.794300   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:43.794607   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:43.794654   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:44.294246   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.294318   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.294647   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:44.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:21:44.793399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:44.793724   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.293515   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:45.793425   40272 type.go:168] "Request Body" body=""
	I1202 19:21:45.793496   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:45.793836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:46.293443   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.293848   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:46.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:46.793426   40272 type.go:168] "Request Body" body=""
	I1202 19:21:46.793501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:46.793766   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.293628   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.293717   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.294035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:47.793981   40272 type.go:168] "Request Body" body=""
	I1202 19:21:47.794060   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:47.794397   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:48.293997   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.294340   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:48.294384   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:48.794112   40272 type.go:168] "Request Body" body=""
	I1202 19:21:48.794192   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:48.794535   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.294213   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.294292   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.294585   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:49.794336   40272 type.go:168] "Request Body" body=""
	I1202 19:21:49.794401   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:49.794648   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.293343   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.293431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.293749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:50.793332   40272 type.go:168] "Request Body" body=""
	I1202 19:21:50.793431   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:50.793733   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:50.793781   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:51.294382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.294451   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.294749   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:51.794404   40272 type.go:168] "Request Body" body=""
	I1202 19:21:51.794484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:51.794827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:52.793741   40272 type.go:168] "Request Body" body=""
	I1202 19:21:52.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:52.794061   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:52.794098   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:53.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.293502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.293842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:53.793547   40272 type.go:168] "Request Body" body=""
	I1202 19:21:53.793619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:53.793965   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.293686   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.293772   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.294030   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:54.793460   40272 type.go:168] "Request Body" body=""
	I1202 19:21:54.793531   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:54.793891   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:55.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.293522   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.293858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:55.293916   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:55.793583   40272 type.go:168] "Request Body" body=""
	I1202 19:21:55.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:55.793966   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.293537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.293879   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:56.793605   40272 type.go:168] "Request Body" body=""
	I1202 19:21:56.793700   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:56.794037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:57.293742   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.293812   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.294147   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:57.294199   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:21:57.793958   40272 type.go:168] "Request Body" body=""
	I1202 19:21:57.794029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:57.794360   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.294144   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.294215   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.294530   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:58.794311   40272 type.go:168] "Request Body" body=""
	I1202 19:21:58.794384   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:58.794669   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.293382   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.293457   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:21:59.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:21:59.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:21:59.793915   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:21:59.793970   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:00.294203   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.294291   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.294597   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:00.794373   40272 type.go:168] "Request Body" body=""
	I1202 19:22:00.794448   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:00.794765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:01.793408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:01.793474   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:01.793745   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:02.293452   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.293521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.293831   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:02.293882   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:02.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:22:02.793524   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:02.794100   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.293751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.293828   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.294092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:03.793779   40272 type.go:168] "Request Body" body=""
	I1202 19:22:03.793863   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:03.794213   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:04.294013   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.294096   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.294427   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:04.294479   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:04.794192   40272 type.go:168] "Request Body" body=""
	I1202 19:22:04.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:04.794518   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.294290   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.294361   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.294692   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:05.793413   40272 type.go:168] "Request Body" body=""
	I1202 19:22:05.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:05.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.293537   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.293611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.293889   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:06.793449   40272 type.go:168] "Request Body" body=""
	I1202 19:22:06.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:06.793886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:06.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:07.293450   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.293561   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.294001   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:07.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:22:07.794120   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:07.794431   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.294242   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.294315   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.294633   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:08.793325   40272 type.go:168] "Request Body" body=""
	I1202 19:22:08.793395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:08.793730   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:09.293421   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:09.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:09.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:22:09.793537   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:09.793938   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.293512   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.293605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.293914   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:10.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:22:10.793473   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:10.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:11.293419   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.293846   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:11.293911   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:11.793571   40272 type.go:168] "Request Body" body=""
	I1202 19:22:11.793667   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:11.793998   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.293707   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.294044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:12.794038   40272 type.go:168] "Request Body" body=""
	I1202 19:22:12.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:12.794457   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:13.294219   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.294294   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.294608   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:13.294662   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:13.793319   40272 type.go:168] "Request Body" body=""
	I1202 19:22:13.793385   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:13.793631   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.293401   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.293494   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.293827   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:14.793568   40272 type.go:168] "Request Body" body=""
	I1202 19:22:14.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:14.793974   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.293634   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.293715   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.294019   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:15.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:15.793580   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:15.793905   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:15.793957   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:16.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.293753   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.294105   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:16.793751   40272 type.go:168] "Request Body" body=""
	I1202 19:22:16.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:16.794139   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.294035   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.294104   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.294447   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:17.794420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:17.794500   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:17.794802   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:17.794864   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:18.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.293803   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:18.793470   40272 type.go:168] "Request Body" body=""
	I1202 19:22:18.793539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:18.793908   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.293459   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.293535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.293874   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:19.793487   40272 type.go:168] "Request Body" body=""
	I1202 19:22:19.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:19.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:20.293588   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.293670   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.293990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:20.294043   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:20.793747   40272 type.go:168] "Request Body" body=""
	I1202 19:22:20.793818   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:20.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.293756   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.293829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.294078   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:21.793486   40272 type.go:168] "Request Body" body=""
	I1202 19:22:21.793559   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:21.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.293599   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.293684   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.293961   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:22.793847   40272 type.go:168] "Request Body" body=""
	I1202 19:22:22.793919   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:22.794173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:22.794221   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:23.294004   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.294077   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.294391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:23.794182   40272 type.go:168] "Request Body" body=""
	I1202 19:22:23.794263   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:23.794569   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.294310   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.294382   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.294678   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:24.793423   40272 type.go:168] "Request Body" body=""
	I1202 19:22:24.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:24.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:25.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.293497   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.293849   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:25.293899   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:25.793411   40272 type.go:168] "Request Body" body=""
	I1202 19:22:25.793491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:25.793784   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.293440   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.293511   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:26.793432   40272 type.go:168] "Request Body" body=""
	I1202 19:22:26.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:26.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:27.293716   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.293790   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.294037   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:27.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:27.794020   40272 type.go:168] "Request Body" body=""
	I1202 19:22:27.794114   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:27.794453   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.294228   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.294302   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.294604   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:28.794372   40272 type.go:168] "Request Body" body=""
	I1202 19:22:28.794442   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:28.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.293384   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.293459   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.293809   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:29.793369   40272 type.go:168] "Request Body" body=""
	I1202 19:22:29.793452   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:29.793775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:29.793828   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:30.293430   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.293826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:30.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:30.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:30.793820   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.293540   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.293618   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.293975   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:31.793639   40272 type.go:168] "Request Body" body=""
	I1202 19:22:31.793724   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:31.794026   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:31.794076   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:32.293441   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.293514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.293867   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:32.793458   40272 type.go:168] "Request Body" body=""
	I1202 19:22:32.793534   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:32.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.293479   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.293808   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:33.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:33.793577   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:33.793932   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:34.293638   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.293733   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.294053   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:34.294138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:34.793757   40272 type.go:168] "Request Body" body=""
	I1202 19:22:34.793829   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:34.794123   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.293805   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.293875   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.294212   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:35.793796   40272 type.go:168] "Request Body" body=""
	I1202 19:22:35.793870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:35.794183   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:36.293916   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.293981   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.294225   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:36.294266   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:36.793977   40272 type.go:168] "Request Body" body=""
	I1202 19:22:36.794051   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:36.794349   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.294147   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.294225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.294553   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:37.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:22:37.794437   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:37.794726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.293425   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.293504   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.293852   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:38.793561   40272 type.go:168] "Request Body" body=""
	I1202 19:22:38.793639   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:38.793979   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:38.794037   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:39.293408   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.293489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.293812   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:39.793433   40272 type.go:168] "Request Body" body=""
	I1202 19:22:39.793508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:39.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.293428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.293501   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.293825   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:40.793396   40272 type.go:168] "Request Body" body=""
	I1202 19:22:40.793461   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:40.793725   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:41.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.293519   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:41.293919   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:41.793428   40272 type.go:168] "Request Body" body=""
	I1202 19:22:41.793502   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:41.793837   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.306206   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.306286   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.306588   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:42.793431   40272 type.go:168] "Request Body" body=""
	I1202 19:22:42.793505   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:42.793842   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:43.293564   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.293646   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.294014   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:43.294075   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:43.793719   40272 type.go:168] "Request Body" body=""
	I1202 19:22:43.793791   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:43.794033   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.293420   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.293493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.293840   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:44.794154   40272 type.go:168] "Request Body" body=""
	I1202 19:22:44.794225   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:44.794573   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.293335   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.293432   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.293823   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:45.793584   40272 type.go:168] "Request Body" body=""
	I1202 19:22:45.793699   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:45.794020   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:45.794077   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:46.293765   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.293835   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.294194   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:46.793979   40272 type.go:168] "Request Body" body=""
	I1202 19:22:46.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:46.794328   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.294352   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.294421   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.294757   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:47.793438   40272 type.go:168] "Request Body" body=""
	I1202 19:22:47.793514   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:47.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:48.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.293488   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:48.293860   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:48.793501   40272 type.go:168] "Request Body" body=""
	I1202 19:22:48.793573   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:48.793896   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.293621   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.293725   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:49.793746   40272 type.go:168] "Request Body" body=""
	I1202 19:22:49.793826   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:49.794140   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:50.293958   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.294029   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.294356   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:50.294412   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:50.794160   40272 type.go:168] "Request Body" body=""
	I1202 19:22:50.794234   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:50.794577   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.294330   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.294397   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.294654   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:51.793328   40272 type.go:168] "Request Body" body=""
	I1202 19:22:51.793400   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:51.793736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.293414   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.293818   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:52.793409   40272 type.go:168] "Request Body" body=""
	I1202 19:22:52.793476   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:52.793765   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:52.793817   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:53.293347   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.293829   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:53.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:22:53.793594   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:53.793990   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.293543   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.293619   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.293933   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:54.793461   40272 type.go:168] "Request Body" body=""
	I1202 19:22:54.793545   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:54.793885   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:54.793939   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:55.293468   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.293897   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:55.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:22:55.793627   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:55.793890   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.293469   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.293539   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.293845   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:56.793575   40272 type.go:168] "Request Body" body=""
	I1202 19:22:56.793643   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:56.793943   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:56.793996   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:57.293776   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.293861   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.294145   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:57.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:22:57.794158   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:57.794484   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.294275   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.294346   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.294665   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:58.793386   40272 type.go:168] "Request Body" body=""
	I1202 19:22:58.793463   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:58.793763   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:22:59.293456   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.293540   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.293903   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:22:59.293962   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:22:59.793451   40272 type.go:168] "Request Body" body=""
	I1202 19:22:59.793525   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:22:59.793862   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.296332   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.296406   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.296694   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:00.793405   40272 type.go:168] "Request Body" body=""
	I1202 19:23:00.793484   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:00.793819   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.293427   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.293498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.293844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:01.793424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:01.793493   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:01.793826   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:01.793879   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:02.293549   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.293637   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.294144   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:02.793976   40272 type.go:168] "Request Body" body=""
	I1202 19:23:02.794047   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:02.794355   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.294017   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.294088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.294379   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:03.794041   40272 type.go:168] "Request Body" body=""
	I1202 19:23:03.794118   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:03.794444   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:03.794495   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:04.294106   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.294176   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.294496   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:04.794236   40272 type.go:168] "Request Body" body=""
	I1202 19:23:04.794365   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:04.794711   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.293422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.293850   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:05.793529   40272 type.go:168] "Request Body" body=""
	I1202 19:23:05.793605   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:05.793941   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:06.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.293719   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.294067   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:06.294117   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:06.793866   40272 type.go:168] "Request Body" body=""
	I1202 19:23:06.793938   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:06.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.293887   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.293967   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.294287   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:07.794086   40272 type.go:168] "Request Body" body=""
	I1202 19:23:07.794150   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:07.794403   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:08.294171   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.294258   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.294594   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:08.294647   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:08.793335   40272 type.go:168] "Request Body" body=""
	I1202 19:23:08.793404   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:08.793760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.293426   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.293810   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:09.793478   40272 type.go:168] "Request Body" body=""
	I1202 19:23:09.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:09.793856   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.293538   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.293617   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.293956   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:10.793532   40272 type.go:168] "Request Body" body=""
	I1202 19:23:10.793599   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:10.793863   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:10.793903   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:11.293547   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.293625   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.293948   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:11.793691   40272 type.go:168] "Request Body" body=""
	I1202 19:23:11.793764   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:11.794076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.293464   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.293775   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:12.793673   40272 type.go:168] "Request Body" body=""
	I1202 19:23:12.793745   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:12.794066   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:12.794115   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:13.293795   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.293870   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.294207   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:13.793969   40272 type.go:168] "Request Body" body=""
	I1202 19:23:13.794033   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:13.794283   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.294039   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.294109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.294436   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:14.794094   40272 type.go:168] "Request Body" body=""
	I1202 19:23:14.794171   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:14.794488   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:14.794541   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:15.294282   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.294357   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.294611   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:15.794365   40272 type.go:168] "Request Body" body=""
	I1202 19:23:15.794443   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:15.794770   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.293438   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.293508   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.293836   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:16.793406   40272 type.go:168] "Request Body" body=""
	I1202 19:23:16.793477   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:16.793791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:17.293700   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.293776   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.294065   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:17.294109   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:17.793903   40272 type.go:168] "Request Body" body=""
	I1202 19:23:17.793973   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:17.794593   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.294328   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.294399   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.294646   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:18.793322   40272 type.go:168] "Request Body" body=""
	I1202 19:23:18.793392   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:18.793726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.293424   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.293503   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.293833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:19.793414   40272 type.go:168] "Request Body" body=""
	I1202 19:23:19.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:19.793807   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:19.793870   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:20.293525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.293596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.293940   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:20.793525   40272 type.go:168] "Request Body" body=""
	I1202 19:23:20.793601   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:20.793946   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.293620   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.293705   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.294002   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:21.793707   40272 type.go:168] "Request Body" body=""
	I1202 19:23:21.793780   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:21.794097   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:21.794151   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:22.293820   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.293892   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.294246   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:22.794023   40272 type.go:168] "Request Body" body=""
	I1202 19:23:22.794088   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:22.794347   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.294098   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.294169   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.294495   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:23.794344   40272 type.go:168] "Request Body" body=""
	I1202 19:23:23.794436   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:23.794764   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:23.794818   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:24.293402   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.293471   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:24.793418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:24.793495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:24.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.293624   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.293973   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:25.793669   40272 type.go:168] "Request Body" body=""
	I1202 19:23:25.793735   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:25.793985   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:26.293681   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.293789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.294111   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:26.294163   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:26.793710   40272 type.go:168] "Request Body" body=""
	I1202 19:23:26.793789   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:26.794114   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:27.293843   40272 type.go:168] "Request Body" body=""
	I1202 19:23:27.293914   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:27.294239   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:27.794080   40272 type.go:168] "Request Body" body=""
	I1202 19:23:27.794155   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:27.794487   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:28.294258   40272 type.go:168] "Request Body" body=""
	I1202 19:23:28.294337   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:28.294650   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:28.294705   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:28.793349   40272 type.go:168] "Request Body" body=""
	I1202 19:23:28.793419   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:28.793701   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:29.294241   40272 type.go:168] "Request Body" body=""
	I1202 19:23:29.294352   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:29.294701   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:29.793416   40272 type.go:168] "Request Body" body=""
	I1202 19:23:29.793489   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:29.793834   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:30.293509   40272 type.go:168] "Request Body" body=""
	I1202 19:23:30.293582   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:30.293886   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:30.793466   40272 type.go:168] "Request Body" body=""
	I1202 19:23:30.793535   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:30.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:30.793940   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:31.293409   40272 type.go:168] "Request Body" body=""
	I1202 19:23:31.293491   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:31.293801   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:31.793457   40272 type.go:168] "Request Body" body=""
	I1202 19:23:31.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:31.793828   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:32.293492   40272 type.go:168] "Request Body" body=""
	I1202 19:23:32.293560   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:32.293860   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:32.793447   40272 type.go:168] "Request Body" body=""
	I1202 19:23:32.793523   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:32.793851   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:33.293498   40272 type.go:168] "Request Body" body=""
	I1202 19:23:33.293569   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:33.293865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:33.293906   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:33.793580   40272 type.go:168] "Request Body" body=""
	I1202 19:23:33.793671   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:33.793968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:34.293678   40272 type.go:168] "Request Body" body=""
	I1202 19:23:34.293781   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:34.294103   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:34.793774   40272 type.go:168] "Request Body" body=""
	I1202 19:23:34.793844   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:34.794094   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:35.293808   40272 type.go:168] "Request Body" body=""
	I1202 19:23:35.293879   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:35.294203   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:35.294261   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:35.794011   40272 type.go:168] "Request Body" body=""
	I1202 19:23:35.794103   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:35.794387   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:36.294141   40272 type.go:168] "Request Body" body=""
	I1202 19:23:36.294296   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:36.294603   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:36.794385   40272 type.go:168] "Request Body" body=""
	I1202 19:23:36.794460   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:36.794791   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:37.293721   40272 type.go:168] "Request Body" body=""
	I1202 19:23:37.293800   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:37.294132   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:37.793966   40272 type.go:168] "Request Body" body=""
	I1202 19:23:37.794036   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:37.794297   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:37.794344   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:38.294080   40272 type.go:168] "Request Body" body=""
	I1202 19:23:38.294155   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:38.294482   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:38.794270   40272 type.go:168] "Request Body" body=""
	I1202 19:23:38.794347   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:38.794663   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:39.293411   40272 type.go:168] "Request Body" body=""
	I1202 19:23:39.293490   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:39.293789   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:39.793476   40272 type.go:168] "Request Body" body=""
	I1202 19:23:39.793548   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:39.793865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:40.293455   40272 type.go:168] "Request Body" body=""
	I1202 19:23:40.293527   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:40.293907   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:40.293963   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:40.793523   40272 type.go:168] "Request Body" body=""
	I1202 19:23:40.793587   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:40.793858   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:41.293506   40272 type.go:168] "Request Body" body=""
	I1202 19:23:41.293598   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:41.293954   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:41.793417   40272 type.go:168] "Request Body" body=""
	I1202 19:23:41.793486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:41.793844   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:42.293444   40272 type.go:168] "Request Body" body=""
	I1202 19:23:42.293546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:42.293898   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:42.793891   40272 type.go:168] "Request Body" body=""
	I1202 19:23:42.793960   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:42.794275   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:42.794326   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:43.294061   40272 type.go:168] "Request Body" body=""
	I1202 19:23:43.294133   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:43.294467   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:43.794241   40272 type.go:168] "Request Body" body=""
	I1202 19:23:43.794316   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:43.794572   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:44.294331   40272 type.go:168] "Request Body" body=""
	I1202 19:23:44.294411   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:44.294778   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:44.793422   40272 type.go:168] "Request Body" body=""
	I1202 19:23:44.793499   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:44.793857   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:45.293554   40272 type.go:168] "Request Body" body=""
	I1202 19:23:45.293631   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:45.293928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:45.293977   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:45.793442   40272 type.go:168] "Request Body" body=""
	I1202 19:23:45.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:45.793835   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:46.293534   40272 type.go:168] "Request Body" body=""
	I1202 19:23:46.293612   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:46.294003   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:46.793541   40272 type.go:168] "Request Body" body=""
	I1202 19:23:46.793611   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:46.793878   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:47.293767   40272 type.go:168] "Request Body" body=""
	I1202 19:23:47.293837   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:47.294173   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:47.294229   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:47.794221   40272 type.go:168] "Request Body" body=""
	I1202 19:23:47.794317   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:47.794702   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:48.293397   40272 type.go:168] "Request Body" body=""
	I1202 19:23:48.293486   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:48.293760   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:48.793453   40272 type.go:168] "Request Body" body=""
	I1202 19:23:48.793530   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:48.793847   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:49.293446   40272 type.go:168] "Request Body" body=""
	I1202 19:23:49.293546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:49.293944   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:49.793512   40272 type.go:168] "Request Body" body=""
	I1202 19:23:49.793587   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:49.793875   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:49.793918   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:50.293594   40272 type.go:168] "Request Body" body=""
	I1202 19:23:50.293685   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:50.294016   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:50.793739   40272 type.go:168] "Request Body" body=""
	I1202 19:23:50.793816   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:50.794142   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:51.293812   40272 type.go:168] "Request Body" body=""
	I1202 19:23:51.293881   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:51.294164   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:51.793945   40272 type.go:168] "Request Body" body=""
	I1202 19:23:51.794024   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:51.794370   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:51.794425   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:52.294111   40272 type.go:168] "Request Body" body=""
	I1202 19:23:52.294180   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:52.294514   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:52.794387   40272 type.go:168] "Request Body" body=""
	I1202 19:23:52.794468   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:52.794736   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:53.293439   40272 type.go:168] "Request Body" body=""
	I1202 19:23:53.293512   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:53.293866   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:53.793588   40272 type.go:168] "Request Body" body=""
	I1202 19:23:53.793680   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:53.794035   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:54.293418   40272 type.go:168] "Request Body" body=""
	I1202 19:23:54.293495   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:54.293813   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:54.293865   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:54.793520   40272 type.go:168] "Request Body" body=""
	I1202 19:23:54.793596   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:54.793936   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:55.293437   40272 type.go:168] "Request Body" body=""
	I1202 19:23:55.293520   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:55.293871   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:55.793440   40272 type.go:168] "Request Body" body=""
	I1202 19:23:55.793521   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:55.793859   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:56.293555   40272 type.go:168] "Request Body" body=""
	I1202 19:23:56.293632   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:56.293967   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:56.294027   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:56.793450   40272 type.go:168] "Request Body" body=""
	I1202 19:23:56.793532   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:56.793861   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:57.293744   40272 type.go:168] "Request Body" body=""
	I1202 19:23:57.293822   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:57.294076   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:57.794034   40272 type.go:168] "Request Body" body=""
	I1202 19:23:57.794109   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:57.794429   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:58.294164   40272 type.go:168] "Request Body" body=""
	I1202 19:23:58.294240   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:58.294551   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:23:58.294605   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:23:58.794324   40272 type.go:168] "Request Body" body=""
	I1202 19:23:58.794395   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:58.794640   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:59.293351   40272 type.go:168] "Request Body" body=""
	I1202 19:23:59.293426   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:59.293726   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:23:59.793448   40272 type.go:168] "Request Body" body=""
	I1202 19:23:59.793529   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:23:59.793894   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:00.301671   40272 type.go:168] "Request Body" body=""
	I1202 19:24:00.301760   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:00.302092   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:00.302138   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:00.793439   40272 type.go:168] "Request Body" body=""
	I1202 19:24:00.793509   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:00.793888   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:01.293581   40272 type.go:168] "Request Body" body=""
	I1202 19:24:01.293683   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:01.294068   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:01.793443   40272 type.go:168] "Request Body" body=""
	I1202 19:24:01.793546   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:01.793928   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:02.293497   40272 type.go:168] "Request Body" body=""
	I1202 19:24:02.293633   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:02.293968   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:02.793760   40272 type.go:168] "Request Body" body=""
	I1202 19:24:02.793866   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:02.794174   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:02.794228   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:03.293986   40272 type.go:168] "Request Body" body=""
	I1202 19:24:03.294063   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:03.296865   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1202 19:24:03.793562   40272 type.go:168] "Request Body" body=""
	I1202 19:24:03.793640   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:03.793994   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:04.293692   40272 type.go:168] "Request Body" body=""
	I1202 19:24:04.293763   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:04.294056   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:04.793427   40272 type.go:168] "Request Body" body=""
	I1202 19:24:04.793498   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:04.793833   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:05.293536   40272 type.go:168] "Request Body" body=""
	I1202 19:24:05.293614   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:05.293970   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 19:24:05.294030   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:05.793675   40272 type.go:168] "Request Body" body=""
	I1202 19:24:05.793742   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:05.794044   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:06.293762   40272 type.go:168] "Request Body" body=""
	I1202 19:24:06.293838   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:06.294177   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:06.793966   40272 type.go:168] "Request Body" body=""
	I1202 19:24:06.794048   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:06.794391   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:07.294030   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.294116   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.298234   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1202 19:24:07.301805   40272 node_ready.go:55] error getting node "functional-374330" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-374330": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 19:24:07.793594   40272 type.go:168] "Request Body" body=""
	I1202 19:24:07.793690   40272 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-374330" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 19:24:07.794025   40272 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 19:24:08.293448   40272 type.go:168] "Request Body" body=""
	I1202 19:24:08.293509   40272 node_ready.go:38] duration metric: took 6m0.000285031s for node "functional-374330" to be "Ready" ...
	I1202 19:24:08.296878   40272 out.go:203] 
	W1202 19:24:08.299748   40272 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:24:08.299768   40272 out.go:285] * 
	W1202 19:24:08.301915   40272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:24:08.304698   40272 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:24:16 functional-374330 crio[6021]: time="2025-12-02T19:24:16.908352277Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a5dfc978-249b-4528-9b21-d3c4ee472325 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:16 functional-374330 crio[6021]: time="2025-12-02T19:24:16.931233039Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=bdb08d1d-8c4b-47a8-b2ed-c9dd43b633f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:16 functional-374330 crio[6021]: time="2025-12-02T19:24:16.931390419Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=bdb08d1d-8c4b-47a8-b2ed-c9dd43b633f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:16 functional-374330 crio[6021]: time="2025-12-02T19:24:16.931446171Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=bdb08d1d-8c4b-47a8-b2ed-c9dd43b633f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:17 functional-374330 crio[6021]: time="2025-12-02T19:24:17.970158853Z" level=info msg="Checking image status: minikube-local-cache-test:functional-374330" id=0b5368ba-8f6d-4e19-906a-14804a93f070 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:17 functional-374330 crio[6021]: time="2025-12-02T19:24:17.993766829Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-374330" id=3d861562-348c-4174-87ed-c4d8441bfac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:17 functional-374330 crio[6021]: time="2025-12-02T19:24:17.99391506Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-374330 not found" id=3d861562-348c-4174-87ed-c4d8441bfac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:17 functional-374330 crio[6021]: time="2025-12-02T19:24:17.993958694Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-374330 found" id=3d861562-348c-4174-87ed-c4d8441bfac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:18 functional-374330 crio[6021]: time="2025-12-02T19:24:18.017877187Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-374330" id=df622593-ed34-431d-8945-501a9d654e45 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:18 functional-374330 crio[6021]: time="2025-12-02T19:24:18.018074967Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-374330 not found" id=df622593-ed34-431d-8945-501a9d654e45 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:18 functional-374330 crio[6021]: time="2025-12-02T19:24:18.018119906Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-374330 found" id=df622593-ed34-431d-8945-501a9d654e45 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:18 functional-374330 crio[6021]: time="2025-12-02T19:24:18.811087426Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=0a6bce0d-0ca4-4958-96ca-78901794ebdd name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.124897937Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=109a28b2-e4d1-4e84-af3d-b28b7f9f9551 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.125029421Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=109a28b2-e4d1-4e84-af3d-b28b7f9f9551 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.125068042Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=109a28b2-e4d1-4e84-af3d-b28b7f9f9551 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.739571519Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5736aac7-7cc6-429a-a247-7e3e5426e664 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.739703283Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5736aac7-7cc6-429a-a247-7e3e5426e664 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.73976747Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5736aac7-7cc6-429a-a247-7e3e5426e664 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.763152666Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=cccd7a00-ef27-4177-912a-20c68be12228 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.763304359Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=cccd7a00-ef27-4177-912a-20c68be12228 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.7633406Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=cccd7a00-ef27-4177-912a-20c68be12228 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.787265386Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=2cf441fc-1ada-487a-9149-e053ded11254 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.787441949Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=2cf441fc-1ada-487a-9149-e053ded11254 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:19 functional-374330 crio[6021]: time="2025-12-02T19:24:19.787498095Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=2cf441fc-1ada-487a-9149-e053ded11254 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:24:20 functional-374330 crio[6021]: time="2025-12-02T19:24:20.32401712Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3126e1a1-7a3d-4dfc-8d4b-cf9d8bcb12d4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:24:24.172076   10171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:24.172833   10171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:24.174496   10171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:24.174821   10171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:24:24.176375   10171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:24:24 up  1:06,  0 user,  load average: 0.10, 0.21, 0.32
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:24:21 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:22 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 02 19:24:22 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:22 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:22 functional-374330 kubelet[10047]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:22 functional-374330 kubelet[10047]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:22 functional-374330 kubelet[10047]: E1202 19:24:22.375510   10047 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:22 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:22 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:23 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 02 19:24:23 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:23 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:23 functional-374330 kubelet[10067]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:23 functional-374330 kubelet[10067]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:23 functional-374330 kubelet[10067]: E1202 19:24:23.114501   10067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:23 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:23 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:24:23 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 02 19:24:23 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:23 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:24:23 functional-374330 kubelet[10089]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:23 functional-374330 kubelet[10089]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:24:23 functional-374330 kubelet[10089]: E1202 19:24:23.836704   10089 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:24:23 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:24:23 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
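A hedged sketch for reading the kubelet failures above: the kubelet repeatedly exits with "kubelet is configured to not run on a host using cgroup v1", so the first thing to confirm is which cgroup hierarchy the CI host and the node container are actually running. Assuming shell access to the host executing this job and to the docker-driver container named functional-374330 (the name shown in the docker inspect output further down):

	# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates a legacy/hybrid cgroup v1 mount
	stat -fc %T /sys/fs/cgroup/
	# same check inside the minikube node container (docker driver)
	docker exec functional-374330 stat -fc %T /sys/fs/cgroup/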
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (327.964786ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-374330 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 19:26:57.357108    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:28:46.175987    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:30:00.445639    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:30:09.242623    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:31:57.357381    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:33:46.179195    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-374330 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.241661981s)

                                                
                                                
-- stdout --
	* [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443729s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
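A hedged follow-up based only on what the stderr above already states: the "Suggestion" line proposes re-running the same start with an explicit kubelet cgroup driver. Note that the validation failure itself concerns cgroup v1 support rather than the driver, so this retry may not be sufficient on a cgroup v1 host (the kubeadm warning instead points at the kubelet 'FailCgroupV1' option).

	# Sketch of the suggested retry; all flags except the added kubelet.cgroup-driver are taken
	# verbatim from the failing invocation above.
	out/minikube-linux-arm64 start -p functional-374330 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --wait=all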
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-374330 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.243697113s for "functional-374330" cluster.
I1202 19:36:37.433386    4470 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
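
A small aside (not from the test run itself): individual fields can be pulled out of the inspect output above with Go templates, which is the same mechanism this log uses further down to discover the forwarded SSH port and the container IP. The values in the comments are simply read back from the JSON above.

	# Forwarded host port for 22/tcp (the log later runs an equivalent template and gets 32783).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-374330

	# Container IP on the profile network (192.168.49.2 in the output above).
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-374330
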
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (293.049401ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-535807 image ls --format json --alsologtostderr                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr                                            │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format table --alsologtostderr                                                                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ delete         │ -p functional-535807                                                                                                                              │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ start          │ -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ start          │ -p functional-374330 --alsologtostderr -v=8                                                                                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:18 UTC │                     │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:latest                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add minikube-local-cache-test:functional-374330                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache delete minikube-local-cache-test:functional-374330                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl images                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	│ cache          │ functional-374330 cache reload                                                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ kubectl        │ functional-374330 kubectl -- --context functional-374330 get pods                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	│ start          │ -p functional-374330 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:24:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:24:25.235145   46141 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:24:25.235262   46141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:24:25.235266   46141 out.go:374] Setting ErrFile to fd 2...
	I1202 19:24:25.235270   46141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:24:25.235501   46141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:24:25.235832   46141 out.go:368] Setting JSON to false
	I1202 19:24:25.236657   46141 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4004,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:24:25.236712   46141 start.go:143] virtualization:  
	I1202 19:24:25.240137   46141 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:24:25.243026   46141 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:24:25.243116   46141 notify.go:221] Checking for updates...
	I1202 19:24:25.249453   46141 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:24:25.252235   46141 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:24:25.255042   46141 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:24:25.257985   46141 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:24:25.260839   46141 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:24:25.264178   46141 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:24:25.264323   46141 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:24:25.284942   46141 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:24:25.285038   46141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:24:25.377890   46141 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 19:24:25.369067605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:24:25.377983   46141 docker.go:319] overlay module found
	I1202 19:24:25.380979   46141 out.go:179] * Using the docker driver based on existing profile
	I1202 19:24:25.383947   46141 start.go:309] selected driver: docker
	I1202 19:24:25.383955   46141 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:25.384041   46141 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:24:25.384143   46141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:24:25.448724   46141 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 19:24:25.440009169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:24:25.449135   46141 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:24:25.449156   46141 cni.go:84] Creating CNI manager for ""
	I1202 19:24:25.449204   46141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:24:25.449250   46141 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:25.452291   46141 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:24:25.455020   46141 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:24:25.457907   46141 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:24:25.460700   46141 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:24:25.460741   46141 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:24:25.479854   46141 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:24:25.479865   46141 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:24:25.525268   46141 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:24:25.722344   46141 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:24:25.722516   46141 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:24:25.722575   46141 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722662   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:24:25.722674   46141 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.293µs
	I1202 19:24:25.722687   46141 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:24:25.722699   46141 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722728   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:24:25.722732   46141 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 34.97µs
	I1202 19:24:25.722737   46141 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722755   46141 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722765   46141 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:24:25.722787   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:24:25.722792   46141 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722800   46141 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.388µs
	I1202 19:24:25.722806   46141 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722816   46141 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722833   46141 start.go:364] duration metric: took 28.102µs to acquireMachinesLock for "functional-374330"
	I1202 19:24:25.722844   46141 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:24:25.722848   46141 fix.go:54] fixHost starting: 
	I1202 19:24:25.722868   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:24:25.722874   46141 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 59.51µs
	I1202 19:24:25.722879   46141 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722888   46141 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722914   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:24:25.722918   46141 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.859µs
	I1202 19:24:25.722926   46141 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722934   46141 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722961   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:24:25.722965   46141 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.041µs
	I1202 19:24:25.722969   46141 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:24:25.722984   46141 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.723013   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:24:25.723018   46141 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.477µs
	I1202 19:24:25.723022   46141 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:24:25.723030   46141 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.723054   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:24:25.723058   46141 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.956µs
	I1202 19:24:25.723062   46141 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:24:25.723069   46141 cache.go:87] Successfully saved all images to host disk.
	I1202 19:24:25.723135   46141 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:24:25.740024   46141 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:24:25.740043   46141 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:24:25.743422   46141 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:24:25.743444   46141 machine.go:94] provisionDockerMachine start ...
	I1202 19:24:25.743520   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:25.759952   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:25.760267   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:25.760274   46141 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:24:25.913242   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:24:25.913255   46141 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:24:25.913315   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:25.930816   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:25.931108   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:25.931116   46141 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:24:26.092717   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:24:26.092791   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.112703   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:26.112993   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:26.113006   46141 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:24:26.261761   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:24:26.261776   46141 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:24:26.261797   46141 ubuntu.go:190] setting up certificates
	I1202 19:24:26.261807   46141 provision.go:84] configureAuth start
	I1202 19:24:26.261862   46141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:24:26.279208   46141 provision.go:143] copyHostCerts
	I1202 19:24:26.279270   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:24:26.279282   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:24:26.279355   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:24:26.279450   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:24:26.279454   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:24:26.279478   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:24:26.279560   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:24:26.279563   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:24:26.279586   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:24:26.279633   46141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:24:26.509539   46141 provision.go:177] copyRemoteCerts
	I1202 19:24:26.509599   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:24:26.509644   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.526423   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:26.629290   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:24:26.645497   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:24:26.662152   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:24:26.678745   46141 provision.go:87] duration metric: took 416.916855ms to configureAuth
	I1202 19:24:26.678762   46141 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:24:26.678944   46141 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:24:26.679035   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.696214   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:26.696565   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:26.696576   46141 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:24:27.030556   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:24:27.030570   46141 machine.go:97] duration metric: took 1.287120124s to provisionDockerMachine
	I1202 19:24:27.030580   46141 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:24:27.030591   46141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:24:27.030695   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:24:27.030734   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.047988   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.153876   46141 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:24:27.157492   46141 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:24:27.157509   46141 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:24:27.157519   46141 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:24:27.157573   46141 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:24:27.157644   46141 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:24:27.157766   46141 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:24:27.157814   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:24:27.165310   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:24:27.182588   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:24:27.199652   46141 start.go:296] duration metric: took 169.058439ms for postStartSetup
	I1202 19:24:27.199721   46141 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:24:27.199772   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.216431   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.322237   46141 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:24:27.326538   46141 fix.go:56] duration metric: took 1.603683597s for fixHost
	I1202 19:24:27.326551   46141 start.go:83] releasing machines lock for "functional-374330", held for 1.603712807s
	I1202 19:24:27.326613   46141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:24:27.342449   46141 ssh_runner.go:195] Run: cat /version.json
	I1202 19:24:27.342488   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.342715   46141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:24:27.342781   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.364991   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.373848   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.555572   46141 ssh_runner.go:195] Run: systemctl --version
	I1202 19:24:27.562641   46141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:24:27.610413   46141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:24:27.614481   46141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:24:27.614543   46141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:24:27.622250   46141 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:24:27.622263   46141 start.go:496] detecting cgroup driver to use...
	I1202 19:24:27.622291   46141 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:24:27.622334   46141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:24:27.637407   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:24:27.650559   46141 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:24:27.650610   46141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:24:27.665862   46141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:24:27.678201   46141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:24:27.787007   46141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:24:27.899090   46141 docker.go:234] disabling docker service ...
	I1202 19:24:27.899177   46141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:24:27.914485   46141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:24:27.927681   46141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:24:28.045412   46141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:24:28.177124   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:24:28.189334   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:24:28.202961   46141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:24:28.203015   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.211343   46141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:24:28.211423   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.219933   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.227929   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.236036   46141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:24:28.243301   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.251359   46141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.259074   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
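Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) should leave /etc/crio/crio.conf.d/02-crio.conf with a fragment roughly like the following; this is a reconstruction from the commands shown in the log, not a capture of the file on the node:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]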
	I1202 19:24:28.267235   46141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:24:28.274309   46141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:24:28.280789   46141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:24:28.409376   46141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:24:28.552601   46141 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:24:28.552676   46141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:24:28.556545   46141 start.go:564] Will wait 60s for crictl version
	I1202 19:24:28.556594   46141 ssh_runner.go:195] Run: which crictl
	I1202 19:24:28.560016   46141 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:24:28.584096   46141 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:24:28.584179   46141 ssh_runner.go:195] Run: crio --version
	I1202 19:24:28.612035   46141 ssh_runner.go:195] Run: crio --version
	I1202 19:24:28.644724   46141 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:24:28.647719   46141 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:24:28.663830   46141 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:24:28.670469   46141 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 19:24:28.673257   46141 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:24:28.673378   46141 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:24:28.673715   46141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:24:28.712979   46141 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:24:28.712990   46141 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:24:28.712996   46141 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:24:28.713091   46141 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
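The ExecStart override above is what pins the kubelet's flags for this profile (node IP, cgroups-per-qos, hostname override). As an aside, a simple way to confirm which flags a running kubelet actually picked up on the node (not something this test itself runs) would be:

	# print the full command line of the running kubelet
	pgrep -a kubelet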
	I1202 19:24:28.713167   46141 ssh_runner.go:195] Run: crio config
	I1202 19:24:28.766896   46141 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 19:24:28.766918   46141 cni.go:84] Creating CNI manager for ""
	I1202 19:24:28.766927   46141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:24:28.766941   46141 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:24:28.766963   46141 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:24:28.767080   46141 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:24:28.767147   46141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:24:28.774515   46141 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:24:28.774573   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:24:28.781818   46141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:24:28.793879   46141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:24:28.805690   46141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
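The kubeadm configuration printed above is what the scp just shipped to /var/tmp/minikube/kubeadm.yaml.new (2071 bytes). As a hedged aside, and assuming the kubeadm "config validate" subcommand is available in this version, such a file could be sanity-checked by hand with the same binary path the log references:

	# manual check only; not part of what the test runs
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new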
	I1202 19:24:28.818120   46141 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:24:28.821584   46141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:24:28.923612   46141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:24:29.044163   46141 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:24:29.044174   46141 certs.go:195] generating shared ca certs ...
	I1202 19:24:29.044188   46141 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:24:29.044325   46141 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:24:29.044362   46141 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:24:29.044367   46141 certs.go:257] generating profile certs ...
	I1202 19:24:29.044449   46141 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:24:29.044505   46141 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:24:29.044543   46141 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:24:29.044646   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:24:29.044677   46141 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:24:29.044683   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:24:29.044708   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:24:29.044730   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:24:29.044752   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:24:29.044793   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:24:29.045393   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:24:29.065539   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:24:29.085818   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:24:29.107933   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:24:29.124745   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:24:29.141714   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:24:29.158359   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:24:29.174925   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:24:29.191660   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:24:29.208637   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:24:29.226113   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:24:29.242250   46141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:24:29.254421   46141 ssh_runner.go:195] Run: openssl version
	I1202 19:24:29.260244   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:24:29.267946   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.271417   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.271472   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.312066   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:24:29.319673   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:24:29.327613   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.331149   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.331213   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.371529   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:24:29.378966   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:24:29.386811   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.390484   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.390535   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.430996   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
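The openssl x509 -hash calls above are what drive the /etc/ssl/certs/<hash>.0 symlinks created right after them (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A minimal standalone sketch of that convention, using the minikubeCA path taken from this log:

	# compute the OpenSSL subject hash and link the CA under it so TLS clients can find it
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"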
	I1202 19:24:29.438578   46141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:24:29.442282   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:24:29.482760   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:24:29.523856   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:24:29.564389   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:24:29.604810   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:24:29.645380   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:24:29.687886   46141 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:29.687963   46141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:24:29.688021   46141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:24:29.717432   46141 cri.go:89] found id: ""
	I1202 19:24:29.717490   46141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:24:29.725067   46141 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:24:29.725077   46141 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:24:29.725126   46141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:24:29.732065   46141 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.732614   46141 kubeconfig.go:125] found "functional-374330" server: "https://192.168.49.2:8441"
	I1202 19:24:29.734000   46141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:24:29.741333   46141 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 19:09:53.796915722 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 19:24:28.810106590 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 19:24:29.741350   46141 kubeadm.go:1161] stopping kube-system containers ...
	I1202 19:24:29.741369   46141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 19:24:29.741422   46141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:24:29.768496   46141 cri.go:89] found id: ""
	I1202 19:24:29.768555   46141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 19:24:29.784309   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:24:29.792418   46141 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  2 19:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 19:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  2 19:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 19:14 /etc/kubernetes/scheduler.conf
	
	I1202 19:24:29.792472   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:24:29.800190   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:24:29.807339   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.807391   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:24:29.814250   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:24:29.821376   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.821427   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:24:29.828870   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:24:29.836580   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.836638   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:24:29.843919   46141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:24:29.851701   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:29.899912   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.003595   46141 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.103659313s)
	I1202 19:24:31.003654   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.210419   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.280327   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.324104   46141 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:24:31.324170   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
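The long run of pgrep entries that follows is minikube's apiserver wait loop, polling roughly every 500 ms (visible in the timestamps) for a kube-apiserver process belonging to this profile. A rough shell equivalent of that wait, for illustration only:

	# keep polling until a matching kube-apiserver process appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done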
	I1202 19:24:31.824388   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:32.324845   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:32.825182   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:33.324373   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:33.824654   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:34.325193   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:34.825112   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:35.324714   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:35.824303   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:36.324356   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:36.824683   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:37.324294   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:37.824358   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:38.324922   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:38.824376   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:39.324270   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:39.825008   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:40.324553   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:40.824838   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:41.325254   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:41.824311   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:42.324452   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:42.824362   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:43.325153   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:43.824379   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:44.324948   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:44.824287   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:45.325093   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:45.824914   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:46.324315   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:46.825135   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:47.324688   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:47.824319   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:48.325046   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:48.824341   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:49.324306   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:49.824985   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:50.324502   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:50.825062   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:51.325159   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:51.824329   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:52.324431   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:52.824365   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:53.324584   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:53.824229   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:54.324898   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:54.825268   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:55.324621   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:55.824623   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:56.325215   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:56.824326   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:57.324724   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:57.824643   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:58.325213   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:58.824317   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:59.324263   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:59.824993   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:00.324689   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:00.824372   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:01.324768   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:01.824973   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:02.324385   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:02.824324   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:03.325090   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:03.824792   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:04.324258   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:04.825092   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:05.324727   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:05.825067   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:06.325261   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:06.824374   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:07.324258   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:07.825117   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:08.324373   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:08.824931   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:09.324369   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:09.824858   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:10.324555   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:10.824370   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:11.324369   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:11.824824   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:12.325272   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:12.824975   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:13.324579   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:13.824349   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:14.324992   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:14.824471   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:15.325189   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:15.824307   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:16.324299   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:16.824860   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:17.324477   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:17.824853   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:18.324910   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:18.825002   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:19.324312   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:19.824665   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:20.324238   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:20.824261   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:21.325216   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:21.824750   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:22.324310   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:22.825285   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:23.325114   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:23.824701   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:24.324390   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:24.825161   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:25.325162   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:25.824364   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:26.324725   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:26.825185   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:27.324377   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:27.825213   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:28.324403   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:28.824310   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:29.324960   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:29.824818   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:30.325151   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:30.824591   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:31.324373   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:31.324449   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:31.353616   46141 cri.go:89] found id: ""
	I1202 19:25:31.353629   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.353636   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:31.353642   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:31.353718   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:31.378636   46141 cri.go:89] found id: ""
	I1202 19:25:31.378649   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.378656   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:31.378661   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:31.378716   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:31.403292   46141 cri.go:89] found id: ""
	I1202 19:25:31.403305   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.403312   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:31.403317   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:31.403371   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:31.427054   46141 cri.go:89] found id: ""
	I1202 19:25:31.427067   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.427074   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:31.427079   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:31.427133   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:31.451516   46141 cri.go:89] found id: ""
	I1202 19:25:31.451529   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.451536   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:31.451541   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:31.451595   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:31.474863   46141 cri.go:89] found id: ""
	I1202 19:25:31.474876   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.474889   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:31.474895   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:31.474967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:31.499414   46141 cri.go:89] found id: ""
	I1202 19:25:31.499427   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.499434   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:31.499442   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:31.499454   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:31.563997   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:31.564014   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:31.575066   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:31.575080   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:31.644130   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:31.635359   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.636084   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.637852   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.638523   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.640440   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:31.635359   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.636084   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.637852   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.638523   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.640440   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:31.644152   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:31.644164   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:31.720566   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:31.720584   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:34.247873   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:34.257765   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:34.257820   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:34.284109   46141 cri.go:89] found id: ""
	I1202 19:25:34.284122   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.284129   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:34.284134   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:34.284185   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:34.322934   46141 cri.go:89] found id: ""
	I1202 19:25:34.322947   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.322954   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:34.322959   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:34.323011   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:34.356765   46141 cri.go:89] found id: ""
	I1202 19:25:34.356778   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.356785   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:34.356790   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:34.356843   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:34.383799   46141 cri.go:89] found id: ""
	I1202 19:25:34.383811   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.383818   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:34.383824   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:34.383875   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:34.407104   46141 cri.go:89] found id: ""
	I1202 19:25:34.407117   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.407133   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:34.407139   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:34.407207   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:34.431504   46141 cri.go:89] found id: ""
	I1202 19:25:34.431517   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.431523   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:34.431529   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:34.431624   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:34.459463   46141 cri.go:89] found id: ""
	I1202 19:25:34.459477   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.459484   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:34.459492   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:34.459503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:34.524752   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:34.524770   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:34.537010   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:34.537025   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:34.599686   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:34.591441   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.592120   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.593844   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.594367   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.596057   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:34.591441   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.592120   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.593844   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.594367   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.596057   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:34.599696   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:34.599708   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:34.676464   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:34.676483   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:37.209911   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:37.219636   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:37.219691   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:37.243765   46141 cri.go:89] found id: ""
	I1202 19:25:37.243778   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.243785   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:37.243790   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:37.243842   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:37.272015   46141 cri.go:89] found id: ""
	I1202 19:25:37.272028   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.272035   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:37.272040   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:37.272096   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:37.296807   46141 cri.go:89] found id: ""
	I1202 19:25:37.296819   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.296835   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:37.296840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:37.296893   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:37.327436   46141 cri.go:89] found id: ""
	I1202 19:25:37.327449   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.327456   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:37.327461   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:37.327515   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:37.362906   46141 cri.go:89] found id: ""
	I1202 19:25:37.362919   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.362926   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:37.362931   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:37.362985   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:37.386876   46141 cri.go:89] found id: ""
	I1202 19:25:37.386889   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.386896   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:37.386902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:37.386976   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:37.410131   46141 cri.go:89] found id: ""
	I1202 19:25:37.410144   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.410151   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:37.410158   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:37.410169   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:37.420302   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:37.420317   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:37.483848   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:37.476112   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477214   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477968   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.478882   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.480477   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:37.476112   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477214   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477968   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.478882   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.480477   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:37.483857   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:37.483867   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:37.562871   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:37.562889   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:37.593595   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:37.593609   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:40.162349   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:40.172453   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:40.172514   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:40.199726   46141 cri.go:89] found id: ""
	I1202 19:25:40.199756   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.199763   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:40.199768   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:40.199825   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:40.229015   46141 cri.go:89] found id: ""
	I1202 19:25:40.229029   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.229037   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:40.229042   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:40.229097   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:40.255016   46141 cri.go:89] found id: ""
	I1202 19:25:40.255029   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.255036   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:40.255041   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:40.255104   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:40.280314   46141 cri.go:89] found id: ""
	I1202 19:25:40.280337   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.280343   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:40.280349   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:40.280409   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:40.317261   46141 cri.go:89] found id: ""
	I1202 19:25:40.317275   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.317281   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:40.317286   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:40.317351   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:40.350568   46141 cri.go:89] found id: ""
	I1202 19:25:40.350581   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.350588   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:40.350602   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:40.350655   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:40.376758   46141 cri.go:89] found id: ""
	I1202 19:25:40.376772   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.376786   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:40.376794   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:40.376805   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:40.452695   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:40.452719   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:40.478860   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:40.478875   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:40.558280   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:40.558307   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:40.569138   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:40.569159   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:40.633967   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:40.626503   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.627229   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.628749   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.629139   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.630662   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:40.626503   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.627229   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.628749   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.629139   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.630662   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
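The same wait-and-check cycle repeats above roughly every three seconds: a pgrep probe for the apiserver process, then a crictl listing for each control-plane component, all of which come back empty. A minimal sketch for reproducing that check by hand, assuming shell access to the minikube node and using only the commands the cycle itself runs:

    # probe for a running apiserver process (first step of each cycle)
    sudo pgrep -xnf kube-apiserver.*minikube.*
    # list matching CRI-O containers in any state, one component at a time
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=kube-scheduler
    sudo crictl ps -a --quiet --name=kube-controller-manager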
	I1202 19:25:43.135632   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:43.145532   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:43.145592   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:43.170325   46141 cri.go:89] found id: ""
	I1202 19:25:43.170338   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.170345   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:43.170372   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:43.170432   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:43.194956   46141 cri.go:89] found id: ""
	I1202 19:25:43.194970   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.194977   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:43.194982   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:43.195039   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:43.221778   46141 cri.go:89] found id: ""
	I1202 19:25:43.221792   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.221800   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:43.221805   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:43.221862   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:43.248205   46141 cri.go:89] found id: ""
	I1202 19:25:43.248218   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.248225   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:43.248230   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:43.248283   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:43.275958   46141 cri.go:89] found id: ""
	I1202 19:25:43.275971   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.275979   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:43.275984   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:43.276040   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:43.311994   46141 cri.go:89] found id: ""
	I1202 19:25:43.312006   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.312013   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:43.312018   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:43.312070   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:43.338867   46141 cri.go:89] found id: ""
	I1202 19:25:43.338881   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.338888   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:43.338896   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:43.338907   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:43.370951   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:43.370966   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:43.439006   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:43.439023   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:43.449811   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:43.449827   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:43.523274   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:43.515029   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.515710   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.517443   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.518065   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.519776   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:43.515029   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.515710   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.517443   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.518065   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.519776   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:43.523283   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:43.523293   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:46.099316   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:46.109738   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:46.109799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:46.135973   46141 cri.go:89] found id: ""
	I1202 19:25:46.135986   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.135993   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:46.135998   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:46.136053   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:46.160433   46141 cri.go:89] found id: ""
	I1202 19:25:46.160447   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.160454   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:46.160459   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:46.160562   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:46.185345   46141 cri.go:89] found id: ""
	I1202 19:25:46.185358   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.185365   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:46.185371   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:46.185431   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:46.209708   46141 cri.go:89] found id: ""
	I1202 19:25:46.209721   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.209728   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:46.209733   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:46.209799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:46.234274   46141 cri.go:89] found id: ""
	I1202 19:25:46.234288   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.234294   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:46.234299   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:46.234363   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:46.259257   46141 cri.go:89] found id: ""
	I1202 19:25:46.259271   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.259277   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:46.259282   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:46.259336   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:46.282587   46141 cri.go:89] found id: ""
	I1202 19:25:46.282601   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.282607   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:46.282620   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:46.282630   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:46.360010   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:46.351882   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.352560   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354236   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354883   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.356438   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:46.351882   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.352560   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354236   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354883   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.356438   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:46.360029   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:46.360040   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:46.435864   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:46.435883   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:46.464582   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:46.464597   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:46.531766   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:46.531784   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:49.042500   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:49.053773   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:49.053830   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:49.079262   46141 cri.go:89] found id: ""
	I1202 19:25:49.079276   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.079282   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:49.079288   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:49.079342   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:49.104725   46141 cri.go:89] found id: ""
	I1202 19:25:49.104738   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.104745   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:49.104759   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:49.104814   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:49.133788   46141 cri.go:89] found id: ""
	I1202 19:25:49.133801   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.133808   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:49.133824   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:49.133880   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:49.159349   46141 cri.go:89] found id: ""
	I1202 19:25:49.159371   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.159379   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:49.159384   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:49.159443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:49.197548   46141 cri.go:89] found id: ""
	I1202 19:25:49.197562   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.197569   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:49.197574   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:49.197641   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:49.223472   46141 cri.go:89] found id: ""
	I1202 19:25:49.223485   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.223492   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:49.223498   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:49.223558   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:49.247894   46141 cri.go:89] found id: ""
	I1202 19:25:49.247921   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.247929   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:49.247936   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:49.247949   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:49.331462   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:49.331482   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:49.370297   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:49.370316   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:49.439052   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:49.439071   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:49.449975   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:49.449991   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:49.513463   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:49.505741   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.506188   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.507718   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.508375   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.510100   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:49.505741   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.506188   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.507718   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.508375   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.510100   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
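Every describe-nodes attempt fails for the same reason: nothing is listening on localhost:8441, so kubectl's API discovery gets "connection refused". A minimal sketch for confirming the refusal and pulling the same logs each cycle gathers, assuming shell access on the node (the curl probe is an assumption; the journalctl and dmesg commands are the ones logged above):

    # hypothetical direct probe of the apiserver port (expected to be refused here)
    curl -k https://localhost:8441/api?timeout=32s
    # the log sources each cycle collects
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400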
	I1202 19:25:52.015209   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:52.026897   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:52.026956   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:52.053387   46141 cri.go:89] found id: ""
	I1202 19:25:52.053401   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.053408   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:52.053416   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:52.053475   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:52.079773   46141 cri.go:89] found id: ""
	I1202 19:25:52.079787   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.079793   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:52.079799   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:52.079854   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:52.107526   46141 cri.go:89] found id: ""
	I1202 19:25:52.107539   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.107546   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:52.107551   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:52.107610   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:52.134040   46141 cri.go:89] found id: ""
	I1202 19:25:52.134054   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.134061   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:52.134066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:52.134124   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:52.160401   46141 cri.go:89] found id: ""
	I1202 19:25:52.160421   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.160445   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:52.160450   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:52.160512   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:52.186015   46141 cri.go:89] found id: ""
	I1202 19:25:52.186029   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.186035   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:52.186041   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:52.186097   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:52.211315   46141 cri.go:89] found id: ""
	I1202 19:25:52.211328   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.211335   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:52.211342   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:52.211352   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:52.281330   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:52.281350   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:52.294618   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:52.294634   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:52.375867   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:52.366509   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.367120   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370063   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370490   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.371961   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:52.366509   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.367120   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370063   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370490   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.371961   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:52.375884   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:52.375895   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:52.454410   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:52.454433   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:54.985073   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:54.997287   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:54.997351   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:55.033193   46141 cri.go:89] found id: ""
	I1202 19:25:55.033207   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.033214   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:55.033220   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:55.033285   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:55.059947   46141 cri.go:89] found id: ""
	I1202 19:25:55.059961   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.059968   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:55.059973   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:55.060032   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:55.089719   46141 cri.go:89] found id: ""
	I1202 19:25:55.089731   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.089738   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:55.089744   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:55.089804   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:55.116791   46141 cri.go:89] found id: ""
	I1202 19:25:55.116805   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.116811   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:55.116816   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:55.116872   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:55.144575   46141 cri.go:89] found id: ""
	I1202 19:25:55.144589   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.144597   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:55.144602   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:55.144663   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:55.170532   46141 cri.go:89] found id: ""
	I1202 19:25:55.170546   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.170553   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:55.170558   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:55.170613   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:55.201295   46141 cri.go:89] found id: ""
	I1202 19:25:55.201309   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.201317   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:55.201324   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:55.201335   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:55.265951   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:55.265968   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:55.276457   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:55.276472   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:55.358449   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:55.350289   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.351045   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.352737   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.353291   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.354841   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:55.350289   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.351045   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.352737   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.353291   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.354841   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:55.358470   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:55.358481   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:55.438382   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:55.438401   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:57.969884   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:57.980234   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:57.980287   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:58.005151   46141 cri.go:89] found id: ""
	I1202 19:25:58.005165   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.005172   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:58.005177   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:58.005234   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:58.032254   46141 cri.go:89] found id: ""
	I1202 19:25:58.032267   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.032274   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:58.032279   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:58.032338   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:58.058556   46141 cri.go:89] found id: ""
	I1202 19:25:58.058570   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.058578   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:58.058583   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:58.058640   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:58.084123   46141 cri.go:89] found id: ""
	I1202 19:25:58.084136   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.084143   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:58.084148   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:58.084204   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:58.110792   46141 cri.go:89] found id: ""
	I1202 19:25:58.110806   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.110812   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:58.110820   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:58.110877   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:58.136499   46141 cri.go:89] found id: ""
	I1202 19:25:58.136512   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.136519   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:58.136524   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:58.136585   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:58.162083   46141 cri.go:89] found id: ""
	I1202 19:25:58.162096   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.162104   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:58.162111   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:58.162121   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:58.223736   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:58.216185   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.216966   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218585   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218890   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.220343   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:58.216185   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.216966   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218585   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218890   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.220343   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:58.223745   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:58.223756   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:58.308033   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:58.308051   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:58.341126   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:58.341141   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:58.407826   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:58.407843   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
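The failing describe-nodes step uses the kubectl binary and kubeconfig staged on the node, exactly as logged. A minimal sketch of that command plus a lighter equivalent check (the get-nodes variant is an assumption, not part of the logged run):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # assumed alternative: a smaller query against the same kubeconfig
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig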
	I1202 19:26:00.920333   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:00.930302   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:00.930359   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:00.954390   46141 cri.go:89] found id: ""
	I1202 19:26:00.954404   46141 logs.go:282] 0 containers: []
	W1202 19:26:00.954411   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:00.954416   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:00.954483   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:00.980266   46141 cri.go:89] found id: ""
	I1202 19:26:00.980280   46141 logs.go:282] 0 containers: []
	W1202 19:26:00.980287   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:00.980292   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:00.980360   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:01.008460   46141 cri.go:89] found id: ""
	I1202 19:26:01.008482   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.008488   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:01.008493   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:01.008547   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:01.036672   46141 cri.go:89] found id: ""
	I1202 19:26:01.036686   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.036692   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:01.036698   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:01.036753   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:01.061548   46141 cri.go:89] found id: ""
	I1202 19:26:01.061562   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.061568   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:01.061573   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:01.061629   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:01.086617   46141 cri.go:89] found id: ""
	I1202 19:26:01.086631   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.086638   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:01.086643   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:01.086701   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:01.111676   46141 cri.go:89] found id: ""
	I1202 19:26:01.111690   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.111697   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:01.111704   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:01.111714   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:01.176991   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:01.177017   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:01.188305   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:01.188339   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:01.254955   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:01.246696   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.247518   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249195   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249523   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.251117   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:01.246696   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.247518   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249195   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249523   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.251117   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:01.254966   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:01.254977   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:01.336825   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:01.336852   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
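The cycle above repeats throughout this window: minikube probes for a kube-apiserver process, then asks CRI-O for each control-plane container by name, and only falls back to log collection once every lookup comes back empty. A minimal shell sketch of that probe, built only from the commands visible in the Run lines above and meant to be executed inside the minikube node:

	# Same probe the log shows: look for the apiserver process, then each component container by name.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids="$(sudo crictl ps -a --quiet --name="$name")"
	  [ -z "$ids" ] && echo "no container found matching \"$name\""
	done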
	I1202 19:26:03.866716   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:03.876694   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:03.876752   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:03.900150   46141 cri.go:89] found id: ""
	I1202 19:26:03.900164   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.900170   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:03.900176   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:03.900231   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:03.928045   46141 cri.go:89] found id: ""
	I1202 19:26:03.928059   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.928066   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:03.928071   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:03.928128   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:03.952359   46141 cri.go:89] found id: ""
	I1202 19:26:03.952372   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.952379   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:03.952384   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:03.952439   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:03.977113   46141 cri.go:89] found id: ""
	I1202 19:26:03.977127   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.977134   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:03.977139   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:03.977195   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:04.001871   46141 cri.go:89] found id: ""
	I1202 19:26:04.001884   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.001890   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:04.001896   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:04.001950   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:04.029122   46141 cri.go:89] found id: ""
	I1202 19:26:04.029136   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.029143   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:04.029148   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:04.029206   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:04.059191   46141 cri.go:89] found id: ""
	I1202 19:26:04.059205   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.059212   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:04.059219   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:04.059228   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:04.125149   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:04.125166   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:04.136144   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:04.136159   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:04.198077   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:04.190117   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.190591   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192194   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192641   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.194070   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:04.190117   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.190591   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192194   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192641   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.194070   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:04.198088   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:04.198098   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:04.273217   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:04.273235   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:06.807224   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:06.817250   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:06.817318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:06.845880   46141 cri.go:89] found id: ""
	I1202 19:26:06.845895   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.845902   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:06.845908   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:06.845963   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:06.870846   46141 cri.go:89] found id: ""
	I1202 19:26:06.870859   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.870866   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:06.870871   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:06.870927   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:06.896774   46141 cri.go:89] found id: ""
	I1202 19:26:06.896788   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.896794   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:06.896800   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:06.896857   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:06.924394   46141 cri.go:89] found id: ""
	I1202 19:26:06.924407   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.924414   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:06.924419   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:06.924477   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:06.951775   46141 cri.go:89] found id: ""
	I1202 19:26:06.951789   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.951796   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:06.951804   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:06.951865   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:06.976656   46141 cri.go:89] found id: ""
	I1202 19:26:06.976674   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.976682   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:06.976687   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:06.976743   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:07.002712   46141 cri.go:89] found id: ""
	I1202 19:26:07.002726   46141 logs.go:282] 0 containers: []
	W1202 19:26:07.002741   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:07.002753   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:07.002764   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:07.071978   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:07.063868   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.064510   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066274   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066863   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.068501   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:07.063868   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.064510   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066274   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066863   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.068501   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:07.071988   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:07.072001   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:07.148506   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:07.148525   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:07.177526   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:07.177542   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:07.244597   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:07.244614   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:09.755980   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:09.766062   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:09.766136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:09.791272   46141 cri.go:89] found id: ""
	I1202 19:26:09.791285   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.791292   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:09.791297   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:09.791352   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:09.819809   46141 cri.go:89] found id: ""
	I1202 19:26:09.819822   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.819829   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:09.819834   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:09.819890   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:09.845138   46141 cri.go:89] found id: ""
	I1202 19:26:09.845151   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.845158   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:09.845163   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:09.845233   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:09.869181   46141 cri.go:89] found id: ""
	I1202 19:26:09.869194   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.869201   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:09.869215   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:09.869269   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:09.894166   46141 cri.go:89] found id: ""
	I1202 19:26:09.894180   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.894187   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:09.894192   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:09.894246   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:09.918581   46141 cri.go:89] found id: ""
	I1202 19:26:09.918594   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.918601   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:09.918606   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:09.918670   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:09.943199   46141 cri.go:89] found id: ""
	I1202 19:26:09.943213   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.943219   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:09.943227   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:09.943238   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:10.008528   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:10.008545   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:10.019265   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:10.019283   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:10.097788   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:10.089795   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.090510   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092137   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092603   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.094152   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:10.089795   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.090510   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092137   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092603   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.094152   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:10.097798   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:10.097814   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:10.175343   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:10.175361   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
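Every `kubectl describe nodes` attempt in this stretch fails identically: the client dials https://localhost:8441 (the apiserver port taken from the kubeconfig errors above) and gets connection refused, which is consistent with no kube-apiserver container being found. A quick check of whether that port is listening at all, as a hedged sketch using standard tooling rather than anything specific to this test:

	# Inside the node: is anything bound to the apiserver port?
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	# If something is listening, probe the apiserver health endpoint (-k: self-signed cert).
	curl -sk https://localhost:8441/healthz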
	I1202 19:26:12.705105   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:12.714930   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:12.714992   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:12.738794   46141 cri.go:89] found id: ""
	I1202 19:26:12.738808   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.738814   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:12.738819   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:12.738893   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:12.763061   46141 cri.go:89] found id: ""
	I1202 19:26:12.763074   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.763088   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:12.763094   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:12.763147   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:12.789884   46141 cri.go:89] found id: ""
	I1202 19:26:12.789897   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.789904   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:12.789909   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:12.789967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:12.815897   46141 cri.go:89] found id: ""
	I1202 19:26:12.815911   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.815918   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:12.815923   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:12.815980   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:12.842434   46141 cri.go:89] found id: ""
	I1202 19:26:12.842448   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.842455   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:12.842461   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:12.842521   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:12.867046   46141 cri.go:89] found id: ""
	I1202 19:26:12.867059   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.867066   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:12.867071   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:12.867136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:12.891464   46141 cri.go:89] found id: ""
	I1202 19:26:12.891478   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.891484   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:12.891492   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:12.891503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:12.902121   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:12.902136   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:12.963892   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:12.955981   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.956766   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958385   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958708   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.960199   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:12.955981   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.956766   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958385   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958708   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.960199   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:12.963902   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:12.963913   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:13.043923   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:13.043944   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:13.073893   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:13.073909   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:15.646846   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:15.656672   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:15.656727   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:15.685223   46141 cri.go:89] found id: ""
	I1202 19:26:15.685236   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.685243   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:15.685249   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:15.685309   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:15.710499   46141 cri.go:89] found id: ""
	I1202 19:26:15.710513   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.710520   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:15.710526   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:15.710582   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:15.734748   46141 cri.go:89] found id: ""
	I1202 19:26:15.734762   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.734775   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:15.734780   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:15.734833   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:15.759539   46141 cri.go:89] found id: ""
	I1202 19:26:15.759551   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.759558   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:15.759564   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:15.759617   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:15.788358   46141 cri.go:89] found id: ""
	I1202 19:26:15.788371   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.788378   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:15.788383   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:15.788443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:15.813365   46141 cri.go:89] found id: ""
	I1202 19:26:15.813379   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.813386   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:15.813391   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:15.813445   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:15.842535   46141 cri.go:89] found id: ""
	I1202 19:26:15.842550   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.842558   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:15.842565   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:15.842576   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:15.853891   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:15.853906   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:15.921614   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:15.914053   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.914564   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916003   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916376   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.917614   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:15.914053   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.914564   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916003   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916376   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.917614   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:15.921632   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:15.921643   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:15.997309   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:15.997326   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:16.029023   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:16.029039   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:18.596080   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:18.605748   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:18.605804   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:18.630525   46141 cri.go:89] found id: ""
	I1202 19:26:18.630539   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.630546   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:18.630551   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:18.630608   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:18.655399   46141 cri.go:89] found id: ""
	I1202 19:26:18.655412   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.655419   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:18.655425   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:18.655479   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:18.681041   46141 cri.go:89] found id: ""
	I1202 19:26:18.681054   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.681061   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:18.681067   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:18.681123   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:18.710155   46141 cri.go:89] found id: ""
	I1202 19:26:18.710168   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.710181   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:18.710187   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:18.710241   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:18.735242   46141 cri.go:89] found id: ""
	I1202 19:26:18.735256   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.735263   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:18.735268   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:18.735327   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:18.761061   46141 cri.go:89] found id: ""
	I1202 19:26:18.761074   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.761081   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:18.761087   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:18.761149   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:18.788428   46141 cri.go:89] found id: ""
	I1202 19:26:18.788441   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.788448   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:18.788456   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:18.788475   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:18.822471   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:18.822487   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:18.888827   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:18.888844   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:18.899937   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:18.899952   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:18.968344   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:18.961155   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.961520   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963096   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963416   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.964883   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:18.961155   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.961520   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963096   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963416   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.964883   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:18.968353   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:18.968365   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:21.544554   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:21.555728   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:21.555784   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:21.584623   46141 cri.go:89] found id: ""
	I1202 19:26:21.584639   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.584646   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:21.584650   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:21.584710   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:21.614647   46141 cri.go:89] found id: ""
	I1202 19:26:21.614660   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.614668   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:21.614672   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:21.614731   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:21.642925   46141 cri.go:89] found id: ""
	I1202 19:26:21.642938   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.642945   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:21.642950   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:21.643003   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:21.668180   46141 cri.go:89] found id: ""
	I1202 19:26:21.668194   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.668202   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:21.668207   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:21.668263   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:21.693295   46141 cri.go:89] found id: ""
	I1202 19:26:21.693308   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.693315   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:21.693321   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:21.693375   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:21.720442   46141 cri.go:89] found id: ""
	I1202 19:26:21.720456   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.720463   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:21.720477   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:21.720550   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:21.745858   46141 cri.go:89] found id: ""
	I1202 19:26:21.745872   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.745879   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:21.745887   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:21.745898   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:21.821815   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:21.821832   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:21.852228   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:21.852243   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:21.925590   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:21.925615   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:21.936630   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:21.936646   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:22.000893   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:21.992158   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.992882   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.994656   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.995179   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.996825   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:21.992158   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.992882   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.994656   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.995179   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.996825   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
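With no control-plane containers to inspect, the loop keeps re-collecting kubelet, dmesg, CRI-O and container-status output, as the Run lines show. The same collection can be reproduced by hand from the host through `minikube ssh`; a sketch under the assumption that the profile name below is a placeholder to be replaced with the profile this test actually created:

	PROFILE=my-profile   # hypothetical placeholder; substitute the real profile name
	minikube -p "$PROFILE" ssh "sudo journalctl -u kubelet -n 400"
	minikube -p "$PROFILE" ssh "sudo journalctl -u crio -n 400"
	minikube -p "$PROFILE" ssh "sudo crictl ps -a"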
	I1202 19:26:24.501139   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:24.511236   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:24.511298   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:24.536070   46141 cri.go:89] found id: ""
	I1202 19:26:24.536084   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.536091   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:24.536096   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:24.536152   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:24.570105   46141 cri.go:89] found id: ""
	I1202 19:26:24.570118   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.570125   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:24.570131   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:24.570195   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:24.602200   46141 cri.go:89] found id: ""
	I1202 19:26:24.602213   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.602220   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:24.602225   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:24.602286   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:24.627716   46141 cri.go:89] found id: ""
	I1202 19:26:24.627730   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.627737   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:24.627743   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:24.627799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:24.653555   46141 cri.go:89] found id: ""
	I1202 19:26:24.653568   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.653575   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:24.653580   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:24.653638   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:24.681296   46141 cri.go:89] found id: ""
	I1202 19:26:24.681310   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.681316   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:24.681322   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:24.681376   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:24.707692   46141 cri.go:89] found id: ""
	I1202 19:26:24.707705   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.707714   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:24.707721   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:24.707731   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:24.782015   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:24.782033   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:24.809710   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:24.809725   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:24.880042   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:24.880061   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:24.890565   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:24.890580   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:24.952416   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:24.944479   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.945161   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.946873   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.947505   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.949103   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:24.944479   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.945161   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.946873   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.947505   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.949103   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:27.452632   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:27.462873   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:27.462933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:27.487753   46141 cri.go:89] found id: ""
	I1202 19:26:27.487766   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.487773   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:27.487778   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:27.487835   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:27.512748   46141 cri.go:89] found id: ""
	I1202 19:26:27.512762   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.512771   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:27.512776   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:27.512833   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:27.542024   46141 cri.go:89] found id: ""
	I1202 19:26:27.542038   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.542045   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:27.542051   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:27.542109   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:27.579960   46141 cri.go:89] found id: ""
	I1202 19:26:27.579973   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.579979   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:27.579989   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:27.580045   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:27.608229   46141 cri.go:89] found id: ""
	I1202 19:26:27.608242   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.608250   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:27.608255   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:27.608318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:27.634613   46141 cri.go:89] found id: ""
	I1202 19:26:27.634626   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.634633   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:27.634639   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:27.634695   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:27.659548   46141 cri.go:89] found id: ""
	I1202 19:26:27.659562   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.659569   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:27.659576   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:27.659587   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:27.727694   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:27.720173   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.720588   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722165   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722762   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.724256   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:27.720173   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.720588   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722165   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722762   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.724256   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:27.727704   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:27.727715   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:27.802309   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:27.802327   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:27.831471   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:27.831486   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:27.899227   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:27.899244   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
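The loop above repeats the same node-level checks once every few seconds while waiting for the apiserver on localhost:8441. For reference, those checks can be reproduced by hand from a shell on the node (for example via `minikube ssh`); the lines below are only a sketch copied from the commands already shown in the log, not additional test output, and the exact binary path is specific to this v1.35.0-beta.0 run:

	# Sketch: the diagnostic commands minikube runs on the node, per the log above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # is an apiserver process running at all?
	sudo crictl ps -a --quiet --name=kube-apiserver   # any apiserver container in CRI-O? (repeat per component: etcd, coredns, ...)
	sudo journalctl -u crio -n 400                    # container runtime log tail
	sudo journalctl -u kubelet -n 400                 # kubelet log tail
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

In this run every crictl query returns an empty ID list and the kubectl call fails with "connection refused", which is why the loop keeps retrying below.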
	I1202 19:26:30.413752   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:30.423684   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:30.423741   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:30.447673   46141 cri.go:89] found id: ""
	I1202 19:26:30.447688   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.447695   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:30.447706   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:30.447762   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:30.473178   46141 cri.go:89] found id: ""
	I1202 19:26:30.473191   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.473198   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:30.473203   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:30.473258   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:30.499098   46141 cri.go:89] found id: ""
	I1202 19:26:30.499112   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.499119   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:30.499124   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:30.499181   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:30.528083   46141 cri.go:89] found id: ""
	I1202 19:26:30.528096   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.528103   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:30.528108   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:30.528165   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:30.562772   46141 cri.go:89] found id: ""
	I1202 19:26:30.562784   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.562791   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:30.562796   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:30.562852   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:30.592139   46141 cri.go:89] found id: ""
	I1202 19:26:30.592152   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.592158   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:30.592163   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:30.592217   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:30.624862   46141 cri.go:89] found id: ""
	I1202 19:26:30.624875   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.624882   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:30.624889   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:30.624901   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:30.636356   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:30.636374   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:30.698721   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:30.690521   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.691312   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.692970   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.693279   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.694784   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:30.690521   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.691312   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.692970   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.693279   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.694784   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:30.698731   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:30.698745   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:30.775221   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:30.775240   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:30.812702   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:30.812718   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:33.383460   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:33.393252   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:33.393318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:33.417381   46141 cri.go:89] found id: ""
	I1202 19:26:33.417394   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.417401   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:33.417407   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:33.417467   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:33.441554   46141 cri.go:89] found id: ""
	I1202 19:26:33.441567   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.441574   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:33.441580   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:33.441633   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:33.466601   46141 cri.go:89] found id: ""
	I1202 19:26:33.466615   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.466621   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:33.466627   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:33.466680   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:33.494897   46141 cri.go:89] found id: ""
	I1202 19:26:33.494910   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.494917   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:33.494922   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:33.494978   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:33.519464   46141 cri.go:89] found id: ""
	I1202 19:26:33.519478   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.519485   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:33.519490   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:33.519549   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:33.556189   46141 cri.go:89] found id: ""
	I1202 19:26:33.556203   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.556210   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:33.556216   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:33.556276   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:33.592420   46141 cri.go:89] found id: ""
	I1202 19:26:33.592436   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.592442   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:33.592459   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:33.592469   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:33.669109   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:33.669128   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:33.703954   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:33.703970   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:33.773221   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:33.773240   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:33.784054   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:33.784068   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:33.846758   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:33.838322   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.839078   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.840804   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.841128   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.842739   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:33.838322   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.839078   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.840804   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.841128   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.842739   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:36.347013   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:36.357404   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:36.357461   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:36.383307   46141 cri.go:89] found id: ""
	I1202 19:26:36.383322   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.383330   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:36.383336   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:36.383391   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:36.409566   46141 cri.go:89] found id: ""
	I1202 19:26:36.409580   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.409588   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:36.409593   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:36.409682   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:36.435280   46141 cri.go:89] found id: ""
	I1202 19:26:36.435294   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.435300   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:36.435306   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:36.435366   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:36.460290   46141 cri.go:89] found id: ""
	I1202 19:26:36.460304   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.460310   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:36.460316   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:36.460368   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:36.484719   46141 cri.go:89] found id: ""
	I1202 19:26:36.484733   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.484740   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:36.484746   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:36.484800   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:36.510020   46141 cri.go:89] found id: ""
	I1202 19:26:36.510034   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.510042   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:36.510048   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:36.510106   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:36.536500   46141 cri.go:89] found id: ""
	I1202 19:26:36.536515   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.536521   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:36.536529   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:36.536539   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:36.616617   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:36.616636   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:36.647169   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:36.647185   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:36.711768   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:36.711787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:36.723184   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:36.723200   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:36.795174   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:36.786043   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.786834   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.788445   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.789117   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.791007   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:36.786043   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.786834   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.788445   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.789117   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.791007   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:39.296074   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:39.306024   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:39.306085   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:39.335889   46141 cri.go:89] found id: ""
	I1202 19:26:39.335915   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.335923   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:39.335928   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:39.335990   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:39.361424   46141 cri.go:89] found id: ""
	I1202 19:26:39.361438   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.361445   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:39.361450   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:39.361505   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:39.387900   46141 cri.go:89] found id: ""
	I1202 19:26:39.387913   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.387920   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:39.387925   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:39.387988   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:39.413856   46141 cri.go:89] found id: ""
	I1202 19:26:39.413871   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.413878   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:39.413884   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:39.413938   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:39.439194   46141 cri.go:89] found id: ""
	I1202 19:26:39.439208   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.439215   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:39.439221   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:39.439278   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:39.465337   46141 cri.go:89] found id: ""
	I1202 19:26:39.465351   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.465359   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:39.465375   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:39.465442   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:39.493124   46141 cri.go:89] found id: ""
	I1202 19:26:39.493137   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.493144   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:39.493152   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:39.493162   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:39.573759   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:39.573780   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:39.608655   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:39.608671   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:39.681483   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:39.681503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:39.692678   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:39.692693   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:39.753005   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:39.745469   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.746166   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747307   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747932   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.749551   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:39.745469   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.746166   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747307   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747932   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.749551   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:42.253264   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:42.266584   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:42.266662   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:42.301576   46141 cri.go:89] found id: ""
	I1202 19:26:42.301591   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.301599   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:42.301605   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:42.301727   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:42.360247   46141 cri.go:89] found id: ""
	I1202 19:26:42.360262   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.360269   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:42.360275   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:42.360344   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:42.390741   46141 cri.go:89] found id: ""
	I1202 19:26:42.390756   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.390766   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:42.390776   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:42.390853   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:42.419121   46141 cri.go:89] found id: ""
	I1202 19:26:42.419137   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.419144   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:42.419152   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:42.419225   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:42.446778   46141 cri.go:89] found id: ""
	I1202 19:26:42.446792   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.446811   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:42.446816   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:42.446884   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:42.472520   46141 cri.go:89] found id: ""
	I1202 19:26:42.472534   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.472541   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:42.472546   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:42.472603   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:42.498770   46141 cri.go:89] found id: ""
	I1202 19:26:42.498783   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.498789   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:42.498797   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:42.498806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:42.579006   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:42.579025   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:42.609942   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:42.609958   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:42.683995   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:42.684022   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:42.695018   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:42.695038   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:42.757205   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:42.749273   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.750084   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751649   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751951   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.753404   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:42.749273   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.750084   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751649   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751951   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.753404   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:45.257372   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:45.279258   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:45.279391   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:45.324360   46141 cri.go:89] found id: ""
	I1202 19:26:45.324374   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.324382   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:45.324389   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:45.324461   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:45.357406   46141 cri.go:89] found id: ""
	I1202 19:26:45.357438   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.357445   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:45.357451   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:45.357520   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:45.390814   46141 cri.go:89] found id: ""
	I1202 19:26:45.390829   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.390836   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:45.390842   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:45.390910   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:45.422248   46141 cri.go:89] found id: ""
	I1202 19:26:45.422262   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.422269   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:45.422274   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:45.422331   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:45.447593   46141 cri.go:89] found id: ""
	I1202 19:26:45.447607   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.447614   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:45.447618   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:45.447669   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:45.473750   46141 cri.go:89] found id: ""
	I1202 19:26:45.473763   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.473770   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:45.473775   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:45.473838   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:45.502345   46141 cri.go:89] found id: ""
	I1202 19:26:45.502358   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.502364   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:45.502373   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:45.502383   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:45.569300   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:45.569319   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:45.581070   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:45.581086   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:45.647631   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:45.640250   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.640775   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642295   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642715   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.644225   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:45.640250   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.640775   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642295   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642715   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.644225   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:45.647641   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:45.647652   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:45.722681   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:45.722699   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:48.249966   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:48.259729   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:48.259788   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:48.284968   46141 cri.go:89] found id: ""
	I1202 19:26:48.284981   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.284995   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:48.285001   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:48.285058   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:48.312117   46141 cri.go:89] found id: ""
	I1202 19:26:48.312131   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.312138   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:48.312143   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:48.312196   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:48.338030   46141 cri.go:89] found id: ""
	I1202 19:26:48.338044   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.338050   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:48.338055   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:48.338108   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:48.363655   46141 cri.go:89] found id: ""
	I1202 19:26:48.363668   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.363675   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:48.363680   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:48.363732   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:48.388544   46141 cri.go:89] found id: ""
	I1202 19:26:48.388565   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.388572   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:48.388577   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:48.388631   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:48.413919   46141 cri.go:89] found id: ""
	I1202 19:26:48.413932   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.413939   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:48.413962   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:48.414018   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:48.438768   46141 cri.go:89] found id: ""
	I1202 19:26:48.438782   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.438789   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:48.438796   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:48.438806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:48.508480   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:48.508498   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:48.519336   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:48.519354   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:48.612485   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:48.603737   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.604095   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605597   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605931   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.607339   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:48.603737   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.604095   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605597   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605931   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.607339   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:48.612495   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:48.612505   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:48.689541   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:48.689559   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:51.220741   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:51.230995   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:51.231052   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:51.257767   46141 cri.go:89] found id: ""
	I1202 19:26:51.257786   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.257794   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:51.257801   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:51.257856   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:51.282338   46141 cri.go:89] found id: ""
	I1202 19:26:51.282351   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.282358   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:51.282363   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:51.282425   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:51.311031   46141 cri.go:89] found id: ""
	I1202 19:26:51.311044   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.311051   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:51.311056   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:51.311111   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:51.339385   46141 cri.go:89] found id: ""
	I1202 19:26:51.339399   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.339405   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:51.339410   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:51.339476   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:51.368365   46141 cri.go:89] found id: ""
	I1202 19:26:51.368379   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.368386   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:51.368391   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:51.368455   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:51.393598   46141 cri.go:89] found id: ""
	I1202 19:26:51.393611   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.393618   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:51.393623   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:51.393696   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:51.423516   46141 cri.go:89] found id: ""
	I1202 19:26:51.423529   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.423536   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:51.423543   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:51.423553   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:51.488010   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:51.480076   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.480890   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482553   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482887   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.484436   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:51.480076   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.480890   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482553   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482887   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.484436   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:51.488020   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:51.488031   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:51.568503   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:51.568521   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:51.604611   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:51.604626   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:51.673166   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:51.673184   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:54.184676   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:54.194875   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:54.194933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:54.219830   46141 cri.go:89] found id: ""
	I1202 19:26:54.219850   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.219857   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:54.219863   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:54.219922   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:54.245201   46141 cri.go:89] found id: ""
	I1202 19:26:54.245214   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.245221   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:54.245228   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:54.245295   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:54.270718   46141 cri.go:89] found id: ""
	I1202 19:26:54.270732   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.270739   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:54.270744   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:54.270799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:54.295488   46141 cri.go:89] found id: ""
	I1202 19:26:54.295501   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.295508   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:54.295513   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:54.295568   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:54.320597   46141 cri.go:89] found id: ""
	I1202 19:26:54.320610   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.320617   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:54.320622   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:54.320675   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:54.348002   46141 cri.go:89] found id: ""
	I1202 19:26:54.348017   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.348024   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:54.348029   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:54.348089   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:54.374189   46141 cri.go:89] found id: ""
	I1202 19:26:54.374203   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.374209   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:54.374217   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:54.374229   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:54.439569   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:54.429536   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.430781   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.433917   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.434361   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.435887   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:54.429536   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.430781   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.433917   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.434361   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.435887   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:54.439581   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:54.439594   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:54.524214   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:54.524233   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:54.564820   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:54.564841   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:54.639908   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:54.639928   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:57.151760   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:57.161952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:57.162007   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:57.186061   46141 cri.go:89] found id: ""
	I1202 19:26:57.186074   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.186081   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:57.186087   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:57.186144   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:57.211829   46141 cri.go:89] found id: ""
	I1202 19:26:57.211843   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.211850   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:57.211856   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:57.211914   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:57.237584   46141 cri.go:89] found id: ""
	I1202 19:26:57.237598   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.237605   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:57.237610   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:57.237697   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:57.266726   46141 cri.go:89] found id: ""
	I1202 19:26:57.266740   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.266746   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:57.266752   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:57.266810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:57.293971   46141 cri.go:89] found id: ""
	I1202 19:26:57.293984   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.293991   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:57.293996   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:57.294050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:57.322602   46141 cri.go:89] found id: ""
	I1202 19:26:57.322615   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.322622   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:57.322628   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:57.322685   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:57.347221   46141 cri.go:89] found id: ""
	I1202 19:26:57.347234   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.347249   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:57.347257   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:57.347267   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:57.358475   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:57.358490   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:57.420357   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:57.412236   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.412944   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.414687   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.415347   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.416972   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:57.412236   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.412944   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.414687   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.415347   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.416972   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:57.420367   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:57.420378   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:57.498037   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:57.498057   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:57.530853   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:57.530870   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:00.105404   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:00.167692   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:00.167773   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:00.310630   46141 cri.go:89] found id: ""
	I1202 19:27:00.310644   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.310652   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:00.310659   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:00.310726   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:00.379652   46141 cri.go:89] found id: ""
	I1202 19:27:00.379665   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.379673   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:00.379678   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:00.379740   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:00.417470   46141 cri.go:89] found id: ""
	I1202 19:27:00.417487   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.417496   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:00.417501   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:00.417571   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:00.459129   46141 cri.go:89] found id: ""
	I1202 19:27:00.459144   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.459151   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:00.459157   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:00.459225   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:00.491958   46141 cri.go:89] found id: ""
	I1202 19:27:00.491973   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.491980   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:00.491986   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:00.492050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:00.522076   46141 cri.go:89] found id: ""
	I1202 19:27:00.522091   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.522098   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:00.522110   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:00.522185   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:00.560640   46141 cri.go:89] found id: ""
	I1202 19:27:00.560654   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.560661   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:00.560668   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:00.560677   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:00.652444   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:00.652464   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:00.684426   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:00.684441   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:00.751419   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:00.751437   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:00.763771   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:00.763786   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:00.826022   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:00.817872   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.818580   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820137   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820449   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.822046   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:00.817872   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.818580   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820137   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820449   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.822046   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:03.326866   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:03.336590   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:03.336644   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:03.361031   46141 cri.go:89] found id: ""
	I1202 19:27:03.361045   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.361051   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:03.361057   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:03.361109   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:03.385187   46141 cri.go:89] found id: ""
	I1202 19:27:03.385201   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.385208   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:03.385214   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:03.385268   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:03.410330   46141 cri.go:89] found id: ""
	I1202 19:27:03.410343   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.410350   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:03.410355   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:03.410412   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:03.435485   46141 cri.go:89] found id: ""
	I1202 19:27:03.435499   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.435505   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:03.435511   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:03.435565   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:03.460310   46141 cri.go:89] found id: ""
	I1202 19:27:03.460323   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.460330   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:03.460335   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:03.460389   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:03.488041   46141 cri.go:89] found id: ""
	I1202 19:27:03.488054   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.488061   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:03.488066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:03.488120   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:03.512748   46141 cri.go:89] found id: ""
	I1202 19:27:03.512761   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.512768   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:03.512776   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:03.512787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:03.523642   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:03.523658   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:03.617573   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:03.607856   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.608861   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610107   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610726   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.612297   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:03.607856   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.608861   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610107   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610726   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.612297   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:03.617591   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:03.617602   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:03.694365   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:03.694383   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:03.726522   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:03.726537   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:06.302579   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:06.312543   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:06.312604   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:06.337638   46141 cri.go:89] found id: ""
	I1202 19:27:06.337693   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.337700   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:06.337706   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:06.337764   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:06.362621   46141 cri.go:89] found id: ""
	I1202 19:27:06.362634   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.362641   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:06.362646   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:06.362698   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:06.387105   46141 cri.go:89] found id: ""
	I1202 19:27:06.387121   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.387127   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:06.387133   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:06.387186   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:06.415681   46141 cri.go:89] found id: ""
	I1202 19:27:06.415694   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.415700   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:06.415706   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:06.415760   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:06.444254   46141 cri.go:89] found id: ""
	I1202 19:27:06.444267   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.444274   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:06.444279   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:06.444337   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:06.468778   46141 cri.go:89] found id: ""
	I1202 19:27:06.468791   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.468799   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:06.468805   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:06.468859   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:06.493545   46141 cri.go:89] found id: ""
	I1202 19:27:06.493558   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.493564   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:06.493572   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:06.493583   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:06.567943   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:06.559330   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.560393   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.561970   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.562264   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.563594   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:06.559330   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.560393   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.561970   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.562264   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.563594   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:06.567953   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:06.567963   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:06.656325   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:06.656344   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:06.685907   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:06.685923   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:06.756875   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:06.756894   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:09.270257   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:09.280597   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:09.280658   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:09.304838   46141 cri.go:89] found id: ""
	I1202 19:27:09.304856   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.304863   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:09.304872   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:09.304926   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:09.329409   46141 cri.go:89] found id: ""
	I1202 19:27:09.329422   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.329430   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:09.329435   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:09.329491   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:09.353934   46141 cri.go:89] found id: ""
	I1202 19:27:09.353948   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.353954   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:09.353960   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:09.354016   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:09.379084   46141 cri.go:89] found id: ""
	I1202 19:27:09.379098   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.379105   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:09.379111   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:09.379166   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:09.404377   46141 cri.go:89] found id: ""
	I1202 19:27:09.404391   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.404398   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:09.404403   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:09.404459   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:09.429248   46141 cri.go:89] found id: ""
	I1202 19:27:09.429262   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.429269   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:09.429274   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:09.429331   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:09.453340   46141 cri.go:89] found id: ""
	I1202 19:27:09.453354   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.453360   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:09.453367   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:09.453378   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:09.519114   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:09.519131   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:09.530268   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:09.530282   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:09.622354   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:09.612897   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.613708   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615186   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615757   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.617773   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:09.612897   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.613708   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615186   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615757   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.617773   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:09.622364   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:09.622374   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:09.698919   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:09.698936   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:12.231072   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:12.240732   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:12.240796   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:12.267547   46141 cri.go:89] found id: ""
	I1202 19:27:12.267560   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.267566   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:12.267572   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:12.267626   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:12.291129   46141 cri.go:89] found id: ""
	I1202 19:27:12.291143   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.291150   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:12.291155   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:12.291209   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:12.316228   46141 cri.go:89] found id: ""
	I1202 19:27:12.316242   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.316248   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:12.316253   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:12.316305   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:12.340306   46141 cri.go:89] found id: ""
	I1202 19:27:12.340319   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.340326   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:12.340331   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:12.340386   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:12.365210   46141 cri.go:89] found id: ""
	I1202 19:27:12.365224   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.365230   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:12.365239   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:12.365299   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:12.393299   46141 cri.go:89] found id: ""
	I1202 19:27:12.393312   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.393319   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:12.393327   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:12.393387   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:12.418063   46141 cri.go:89] found id: ""
	I1202 19:27:12.418089   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.418096   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:12.418104   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:12.418114   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:12.450419   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:12.450434   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:12.520281   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:12.520300   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:12.531244   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:12.531260   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:12.614672   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:12.606974   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.607492   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609182   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609779   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.611155   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:12.606974   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.607492   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609182   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609779   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.611155   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:12.614681   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:12.614691   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:15.191935   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:15.202075   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:15.202136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:15.227991   46141 cri.go:89] found id: ""
	I1202 19:27:15.228004   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.228011   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:15.228016   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:15.228073   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:15.253837   46141 cri.go:89] found id: ""
	I1202 19:27:15.253850   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.253856   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:15.253861   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:15.253916   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:15.279658   46141 cri.go:89] found id: ""
	I1202 19:27:15.279671   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.279677   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:15.279682   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:15.279735   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:15.303415   46141 cri.go:89] found id: ""
	I1202 19:27:15.303429   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.303435   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:15.303440   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:15.303496   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:15.327738   46141 cri.go:89] found id: ""
	I1202 19:27:15.327752   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.327759   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:15.327764   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:15.327818   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:15.353097   46141 cri.go:89] found id: ""
	I1202 19:27:15.353110   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.353117   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:15.353122   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:15.353175   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:15.377713   46141 cri.go:89] found id: ""
	I1202 19:27:15.377726   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.377734   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:15.377741   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:15.377751   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:15.443006   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:15.443024   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:15.453500   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:15.453519   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:15.518415   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:15.510822   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.511600   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513071   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513530   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.514994   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:15.510822   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.511600   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513071   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513530   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.514994   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:15.518425   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:15.518438   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:15.596810   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:15.596828   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:18.130179   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:18.140204   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:18.140265   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:18.167800   46141 cri.go:89] found id: ""
	I1202 19:27:18.167814   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.167821   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:18.167826   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:18.167882   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:18.191990   46141 cri.go:89] found id: ""
	I1202 19:27:18.192003   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.192010   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:18.192015   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:18.192072   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:18.216815   46141 cri.go:89] found id: ""
	I1202 19:27:18.216828   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.216835   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:18.216840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:18.216894   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:18.240868   46141 cri.go:89] found id: ""
	I1202 19:27:18.240881   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.240888   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:18.240894   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:18.240950   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:18.265457   46141 cri.go:89] found id: ""
	I1202 19:27:18.265470   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.265476   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:18.265482   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:18.265533   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:18.289248   46141 cri.go:89] found id: ""
	I1202 19:27:18.289262   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.289269   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:18.289275   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:18.289339   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:18.312672   46141 cri.go:89] found id: ""
	I1202 19:27:18.312685   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.312692   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:18.312700   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:18.312710   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:18.380764   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:18.380781   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:18.391485   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:18.391501   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:18.453699   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:18.445756   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.446556   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448225   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448811   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.450496   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:18.445756   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.446556   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448225   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448811   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.450496   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:18.453709   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:18.453720   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:18.530116   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:18.530134   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:21.069567   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:21.079484   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:21.079550   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:21.103488   46141 cri.go:89] found id: ""
	I1202 19:27:21.103503   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.103511   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:21.103517   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:21.103572   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:21.130794   46141 cri.go:89] found id: ""
	I1202 19:27:21.130807   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.130814   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:21.130819   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:21.130876   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:21.154925   46141 cri.go:89] found id: ""
	I1202 19:27:21.154940   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.154946   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:21.154952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:21.155008   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:21.183874   46141 cri.go:89] found id: ""
	I1202 19:27:21.183887   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.183895   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:21.183900   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:21.183956   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:21.208723   46141 cri.go:89] found id: ""
	I1202 19:27:21.208736   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.208744   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:21.208750   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:21.208805   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:21.233965   46141 cri.go:89] found id: ""
	I1202 19:27:21.233978   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.233985   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:21.233990   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:21.234046   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:21.257686   46141 cri.go:89] found id: ""
	I1202 19:27:21.257699   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.257706   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:21.257714   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:21.257724   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:21.318236   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:21.310432   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.311107   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.312717   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.313282   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.314957   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:21.310432   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.311107   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.312717   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.313282   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.314957   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:21.318250   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:21.318261   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:21.395292   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:21.395310   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:21.422658   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:21.422674   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:21.489157   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:21.489174   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:24.001769   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:24.011691   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:24.011752   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:24.042533   46141 cri.go:89] found id: ""
	I1202 19:27:24.042554   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.042561   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:24.042566   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:24.042624   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:24.070666   46141 cri.go:89] found id: ""
	I1202 19:27:24.070679   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.070686   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:24.070691   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:24.070753   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:24.095535   46141 cri.go:89] found id: ""
	I1202 19:27:24.095549   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.095556   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:24.095561   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:24.095619   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:24.123758   46141 cri.go:89] found id: ""
	I1202 19:27:24.123772   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.123779   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:24.123784   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:24.123838   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:24.149095   46141 cri.go:89] found id: ""
	I1202 19:27:24.149108   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.149114   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:24.149120   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:24.149175   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:24.174002   46141 cri.go:89] found id: ""
	I1202 19:27:24.174015   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.174022   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:24.174027   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:24.174125   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:24.200105   46141 cri.go:89] found id: ""
	I1202 19:27:24.200119   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.200126   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:24.200133   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:24.200144   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:24.266202   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:24.266219   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:24.277238   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:24.277253   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:24.343395   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:24.336040   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.336700   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338307   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338687   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.340133   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:24.336040   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.336700   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338307   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338687   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.340133   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:24.343404   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:24.343414   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:24.424919   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:24.424936   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:26.953925   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:26.963713   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:26.963769   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:26.988142   46141 cri.go:89] found id: ""
	I1202 19:27:26.988156   46141 logs.go:282] 0 containers: []
	W1202 19:27:26.988163   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:26.988168   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:26.988223   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:27.013673   46141 cri.go:89] found id: ""
	I1202 19:27:27.013687   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.013694   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:27.013699   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:27.013754   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:27.039371   46141 cri.go:89] found id: ""
	I1202 19:27:27.039384   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.039391   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:27.039396   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:27.039452   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:27.062786   46141 cri.go:89] found id: ""
	I1202 19:27:27.062800   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.062807   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:27.062812   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:27.062868   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:27.087058   46141 cri.go:89] found id: ""
	I1202 19:27:27.087072   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.087078   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:27.087083   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:27.087139   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:27.111397   46141 cri.go:89] found id: ""
	I1202 19:27:27.111410   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.111417   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:27.111422   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:27.111474   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:27.134753   46141 cri.go:89] found id: ""
	I1202 19:27:27.134774   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.134781   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:27.134788   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:27.134798   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:27.200051   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:27.200069   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:27.210589   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:27.210603   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:27.274673   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:27.267276   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.268043   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269599   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269964   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.271462   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:27.267276   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.268043   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269599   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269964   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.271462   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:27.274684   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:27.274695   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:27.350589   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:27.350607   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:29.879009   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:29.888757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:29.888814   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:29.914106   46141 cri.go:89] found id: ""
	I1202 19:27:29.914119   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.914126   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:29.914131   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:29.914198   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:29.945870   46141 cri.go:89] found id: ""
	I1202 19:27:29.945883   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.945890   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:29.945895   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:29.945951   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:29.972147   46141 cri.go:89] found id: ""
	I1202 19:27:29.972161   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.972168   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:29.972173   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:29.972237   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:29.999569   46141 cri.go:89] found id: ""
	I1202 19:27:29.999583   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.999590   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:29.999595   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:29.999654   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:30.048258   46141 cri.go:89] found id: ""
	I1202 19:27:30.048273   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.048281   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:30.048286   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:30.048361   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:30.083224   46141 cri.go:89] found id: ""
	I1202 19:27:30.083238   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.083245   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:30.083251   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:30.083308   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:30.113945   46141 cri.go:89] found id: ""
	I1202 19:27:30.113959   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.113966   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:30.113975   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:30.113986   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:30.192106   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:30.192125   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:30.221887   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:30.221904   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:30.290188   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:30.290204   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:30.301167   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:30.301182   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:30.362881   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:30.354371   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.354912   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.356661   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.357283   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.358977   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:30.354371   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.354912   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.356661   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.357283   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.358977   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:32.863109   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:32.872876   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:32.872937   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:32.897586   46141 cri.go:89] found id: ""
	I1202 19:27:32.897603   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.897610   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:32.897615   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:32.897706   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:32.924245   46141 cri.go:89] found id: ""
	I1202 19:27:32.924258   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.924265   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:32.924270   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:32.924332   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:32.951911   46141 cri.go:89] found id: ""
	I1202 19:27:32.951925   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.951932   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:32.951938   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:32.951992   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:32.975852   46141 cri.go:89] found id: ""
	I1202 19:27:32.975865   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.975872   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:32.975878   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:32.975933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:33.000511   46141 cri.go:89] found id: ""
	I1202 19:27:33.000525   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.000532   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:33.000537   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:33.000591   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:33.030910   46141 cri.go:89] found id: ""
	I1202 19:27:33.030924   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.030931   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:33.030936   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:33.030993   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:33.055909   46141 cri.go:89] found id: ""
	I1202 19:27:33.055922   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.055929   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:33.055937   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:33.055947   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:33.121449   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:33.121471   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:33.134922   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:33.134955   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:33.198500   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:33.189992   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.190784   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.192519   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.193191   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.194708   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:33.189992   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.190784   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.192519   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.193191   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.194708   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:33.198512   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:33.198524   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:33.275340   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:33.275358   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:35.803184   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:35.814556   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:35.814622   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:35.843911   46141 cri.go:89] found id: ""
	I1202 19:27:35.843927   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.843934   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:35.843939   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:35.844010   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:35.872792   46141 cri.go:89] found id: ""
	I1202 19:27:35.872807   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.872814   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:35.872819   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:35.872885   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:35.899563   46141 cri.go:89] found id: ""
	I1202 19:27:35.899576   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.899583   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:35.899588   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:35.899642   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:35.929110   46141 cri.go:89] found id: ""
	I1202 19:27:35.929133   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.929141   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:35.929147   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:35.929214   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:35.953603   46141 cri.go:89] found id: ""
	I1202 19:27:35.953617   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.953624   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:35.953629   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:35.953706   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:35.978487   46141 cri.go:89] found id: ""
	I1202 19:27:35.978501   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.978508   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:35.978513   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:35.978571   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:36.002610   46141 cri.go:89] found id: ""
	I1202 19:27:36.002623   46141 logs.go:282] 0 containers: []
	W1202 19:27:36.002629   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:36.002636   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:36.002647   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:36.078660   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:36.078679   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:36.108572   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:36.108589   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:36.174842   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:36.174858   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:36.185725   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:36.185740   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:36.248843   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:36.241261   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.241837   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243308   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243762   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.245492   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:36.241261   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.241837   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243308   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243762   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.245492   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:38.749933   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:38.759902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:38.759959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:38.784371   46141 cri.go:89] found id: ""
	I1202 19:27:38.784384   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.784390   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:38.784396   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:38.784449   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:38.813903   46141 cri.go:89] found id: ""
	I1202 19:27:38.813918   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.813925   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:38.813930   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:38.813986   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:38.847704   46141 cri.go:89] found id: ""
	I1202 19:27:38.847718   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.847724   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:38.847730   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:38.847786   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:38.874126   46141 cri.go:89] found id: ""
	I1202 19:27:38.874139   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.874146   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:38.874151   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:38.874204   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:38.899808   46141 cri.go:89] found id: ""
	I1202 19:27:38.899822   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.899829   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:38.899835   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:38.899890   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:38.924777   46141 cri.go:89] found id: ""
	I1202 19:27:38.924791   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.924798   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:38.924804   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:38.924898   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:38.949761   46141 cri.go:89] found id: ""
	I1202 19:27:38.949774   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.949781   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:38.949788   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:38.949802   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:39.008770   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:39.001782   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.002283   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003482   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003943   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.005427   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:39.001782   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.002283   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003482   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003943   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.005427   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:39.008780   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:39.008794   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:39.090107   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:39.090125   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:39.122398   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:39.122414   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:39.187817   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:39.187833   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:41.698611   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:41.708767   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:41.708837   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:41.733990   46141 cri.go:89] found id: ""
	I1202 19:27:41.734004   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.734011   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:41.734017   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:41.734080   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:41.759279   46141 cri.go:89] found id: ""
	I1202 19:27:41.759293   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.759299   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:41.759305   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:41.759359   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:41.793259   46141 cri.go:89] found id: ""
	I1202 19:27:41.793272   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.793278   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:41.793284   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:41.793339   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:41.821458   46141 cri.go:89] found id: ""
	I1202 19:27:41.821471   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.821484   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:41.821489   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:41.821545   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:41.849637   46141 cri.go:89] found id: ""
	I1202 19:27:41.849670   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.849678   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:41.849683   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:41.849743   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:41.881100   46141 cri.go:89] found id: ""
	I1202 19:27:41.881113   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.881121   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:41.881127   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:41.881189   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:41.906054   46141 cri.go:89] found id: ""
	I1202 19:27:41.906067   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.906074   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:41.906082   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:41.906092   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:41.916746   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:41.916761   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:41.979747   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:41.971464   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.972400   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.973426   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.974900   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.975372   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:41.971464   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.972400   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.973426   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.974900   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.975372   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:41.979757   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:41.979767   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:42.054766   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:42.054787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:42.086163   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:42.086187   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:44.697773   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:44.707597   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:44.707659   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:44.733158   46141 cri.go:89] found id: ""
	I1202 19:27:44.733184   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.733191   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:44.733196   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:44.733261   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:44.757757   46141 cri.go:89] found id: ""
	I1202 19:27:44.757771   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.757778   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:44.757784   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:44.757843   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:44.783874   46141 cri.go:89] found id: ""
	I1202 19:27:44.783888   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.783897   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:44.783902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:44.783959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:44.816248   46141 cri.go:89] found id: ""
	I1202 19:27:44.816261   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.816268   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:44.816273   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:44.816327   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:44.847419   46141 cri.go:89] found id: ""
	I1202 19:27:44.847433   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.847440   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:44.847445   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:44.847504   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:44.873837   46141 cri.go:89] found id: ""
	I1202 19:27:44.873851   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.873858   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:44.873863   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:44.873918   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:44.897843   46141 cri.go:89] found id: ""
	I1202 19:27:44.897856   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.897863   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:44.897871   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:44.897881   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:44.966499   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:44.966516   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:44.978644   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:44.978659   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:45.054728   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:45.041553   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.042293   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046104   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046962   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.049337   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:45.041553   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.042293   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046104   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046962   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.049337   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:45.054738   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:45.054765   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:45.162639   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:45.162660   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:47.718000   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:47.727890   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:47.727953   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:47.752168   46141 cri.go:89] found id: ""
	I1202 19:27:47.752181   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.752188   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:47.752193   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:47.752253   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:47.776058   46141 cri.go:89] found id: ""
	I1202 19:27:47.776071   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.776078   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:47.776086   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:47.776143   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:47.809050   46141 cri.go:89] found id: ""
	I1202 19:27:47.809065   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.809072   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:47.809078   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:47.809142   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:47.851196   46141 cri.go:89] found id: ""
	I1202 19:27:47.851209   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.851222   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:47.851227   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:47.851285   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:47.877019   46141 cri.go:89] found id: ""
	I1202 19:27:47.877033   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.877039   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:47.877045   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:47.877104   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:47.906595   46141 cri.go:89] found id: ""
	I1202 19:27:47.906609   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.906616   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:47.906621   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:47.906684   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:47.931137   46141 cri.go:89] found id: ""
	I1202 19:27:47.931150   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.931157   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:47.931165   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:47.931175   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:47.960778   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:47.960794   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:48.026698   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:48.026716   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:48.039024   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:48.039040   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:48.104995   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:48.097226   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.097787   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.099441   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.100078   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.101639   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:48.097226   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.097787   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.099441   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.100078   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.101639   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:48.105014   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:48.105026   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:50.681972   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:50.691952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:50.692008   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:50.716419   46141 cri.go:89] found id: ""
	I1202 19:27:50.716432   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.716438   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:50.716443   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:50.716497   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:50.743698   46141 cri.go:89] found id: ""
	I1202 19:27:50.743712   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.743718   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:50.743723   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:50.743778   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:50.768264   46141 cri.go:89] found id: ""
	I1202 19:27:50.768277   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.768283   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:50.768297   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:50.768354   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:50.794403   46141 cri.go:89] found id: ""
	I1202 19:27:50.794428   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.794436   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:50.794441   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:50.794504   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:50.820731   46141 cri.go:89] found id: ""
	I1202 19:27:50.820745   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.820752   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:50.820757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:50.820812   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:50.852081   46141 cri.go:89] found id: ""
	I1202 19:27:50.852094   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.852101   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:50.852106   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:50.852172   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:50.879611   46141 cri.go:89] found id: ""
	I1202 19:27:50.879625   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.879631   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:50.879644   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:50.879654   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:50.906936   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:50.906951   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:50.975206   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:50.975223   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:50.985872   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:50.985895   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:51.052846   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:51.045333   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.046129   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.047754   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.048056   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.049501   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:51.045333   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.046129   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.047754   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.048056   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.049501   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:51.052855   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:51.052866   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:53.628857   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:53.638710   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:53.638773   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:53.662581   46141 cri.go:89] found id: ""
	I1202 19:27:53.662595   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.662602   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:53.662607   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:53.662660   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:53.687222   46141 cri.go:89] found id: ""
	I1202 19:27:53.687237   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.687244   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:53.687249   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:53.687306   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:53.711983   46141 cri.go:89] found id: ""
	I1202 19:27:53.711996   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.712003   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:53.712009   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:53.712065   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:53.737377   46141 cri.go:89] found id: ""
	I1202 19:27:53.737391   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.737398   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:53.737403   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:53.737456   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:53.765301   46141 cri.go:89] found id: ""
	I1202 19:27:53.765315   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.765321   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:53.765327   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:53.765383   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:53.793518   46141 cri.go:89] found id: ""
	I1202 19:27:53.793531   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.793537   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:53.793542   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:53.793597   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:53.822849   46141 cri.go:89] found id: ""
	I1202 19:27:53.822863   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.822870   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:53.822877   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:53.822887   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:53.854992   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:53.855010   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:53.921075   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:53.921094   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:53.931936   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:53.931951   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:53.995407   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:53.987658   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.988344   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990099   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990623   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.992109   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:53.987658   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.988344   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990099   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990623   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.992109   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:53.995422   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:53.995432   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:56.577211   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:56.588419   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:56.588476   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:56.617070   46141 cri.go:89] found id: ""
	I1202 19:27:56.617083   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.617090   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:56.617096   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:56.617149   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:56.644965   46141 cri.go:89] found id: ""
	I1202 19:27:56.644979   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.644986   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:56.644990   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:56.645050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:56.673885   46141 cri.go:89] found id: ""
	I1202 19:27:56.673899   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.673906   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:56.673911   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:56.673965   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:56.698577   46141 cri.go:89] found id: ""
	I1202 19:27:56.698590   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.698597   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:56.698603   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:56.698659   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:56.727980   46141 cri.go:89] found id: ""
	I1202 19:27:56.727995   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.728001   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:56.728007   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:56.728061   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:56.752295   46141 cri.go:89] found id: ""
	I1202 19:27:56.752309   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.752316   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:56.752321   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:56.752378   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:56.777216   46141 cri.go:89] found id: ""
	I1202 19:27:56.777228   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.777236   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:56.777243   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:56.777254   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:56.788028   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:56.788043   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:56.868442   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:56.860615   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.861389   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.862944   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.863447   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.865085   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:56.860615   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.861389   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.862944   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.863447   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.865085   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:56.868452   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:56.868462   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:56.944462   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:56.944480   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:56.979950   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:56.979964   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:59.548516   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:59.558289   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:59.558346   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:59.581971   46141 cri.go:89] found id: ""
	I1202 19:27:59.581984   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.581991   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:59.581997   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:59.582054   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:59.606472   46141 cri.go:89] found id: ""
	I1202 19:27:59.606485   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.606492   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:59.606497   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:59.606551   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:59.631964   46141 cri.go:89] found id: ""
	I1202 19:27:59.631977   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.631984   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:59.631989   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:59.632042   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:59.657151   46141 cri.go:89] found id: ""
	I1202 19:27:59.657164   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.657171   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:59.657177   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:59.657232   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:59.683812   46141 cri.go:89] found id: ""
	I1202 19:27:59.683826   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.683834   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:59.683840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:59.683901   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:59.712800   46141 cri.go:89] found id: ""
	I1202 19:27:59.712814   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.712821   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:59.712826   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:59.712900   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:59.745829   46141 cri.go:89] found id: ""
	I1202 19:27:59.745842   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.745849   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:59.745856   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:59.745868   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:59.817077   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:59.807605   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.808444   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.809916   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.810599   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.812428   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:59.807605   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.808444   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.809916   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.810599   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.812428   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:59.817087   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:59.817097   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:59.907455   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:59.907474   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:59.935466   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:59.935480   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:00.005487   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:00.005511   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:02.519937   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:02.529900   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:02.529967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:02.555080   46141 cri.go:89] found id: ""
	I1202 19:28:02.555093   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.555099   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:02.555105   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:02.555160   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:02.579988   46141 cri.go:89] found id: ""
	I1202 19:28:02.580002   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.580009   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:02.580015   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:02.580069   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:02.604847   46141 cri.go:89] found id: ""
	I1202 19:28:02.604861   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.604868   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:02.604874   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:02.604937   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:02.629805   46141 cri.go:89] found id: ""
	I1202 19:28:02.629818   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.629825   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:02.629832   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:02.629888   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:02.654310   46141 cri.go:89] found id: ""
	I1202 19:28:02.654324   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.654330   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:02.654336   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:02.654393   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:02.683226   46141 cri.go:89] found id: ""
	I1202 19:28:02.683239   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.683246   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:02.683252   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:02.683306   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:02.707703   46141 cri.go:89] found id: ""
	I1202 19:28:02.707717   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.707724   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:02.707732   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:02.707741   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:02.783085   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:02.783103   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:02.829513   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:02.829528   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:02.903215   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:02.903231   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:02.914284   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:02.914302   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:02.974963   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:02.967707   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.968085   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.969699   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.970246   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.971730   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:02.967707   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.968085   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.969699   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.970246   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.971730   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:05.475826   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:05.485953   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:05.486009   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:05.512427   46141 cri.go:89] found id: ""
	I1202 19:28:05.512440   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.512447   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:05.512453   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:05.512509   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:05.536678   46141 cri.go:89] found id: ""
	I1202 19:28:05.536691   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.536698   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:05.536703   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:05.536757   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:05.561732   46141 cri.go:89] found id: ""
	I1202 19:28:05.561745   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.561752   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:05.561757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:05.561810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:05.585989   46141 cri.go:89] found id: ""
	I1202 19:28:05.586003   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.586010   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:05.586015   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:05.586073   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:05.611860   46141 cri.go:89] found id: ""
	I1202 19:28:05.611891   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.611899   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:05.611904   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:05.611969   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:05.637502   46141 cri.go:89] found id: ""
	I1202 19:28:05.637516   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.637523   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:05.637528   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:05.637583   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:05.662486   46141 cri.go:89] found id: ""
	I1202 19:28:05.662499   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.662506   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:05.662514   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:05.662525   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:05.727597   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:05.727615   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:05.738294   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:05.738309   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:05.810066   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:05.802014   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.802691   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804141   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804634   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.806227   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:05.802014   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.802691   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804141   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804634   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.806227   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:05.810076   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:05.810088   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:05.892482   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:05.892506   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:08.423125   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:08.433033   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:08.433090   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:08.458175   46141 cri.go:89] found id: ""
	I1202 19:28:08.458189   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.458195   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:08.458201   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:08.458257   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:08.483893   46141 cri.go:89] found id: ""
	I1202 19:28:08.483906   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.483913   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:08.483918   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:08.483974   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:08.507923   46141 cri.go:89] found id: ""
	I1202 19:28:08.507937   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.507953   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:08.507964   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:08.508081   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:08.537015   46141 cri.go:89] found id: ""
	I1202 19:28:08.537030   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.537041   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:08.537046   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:08.537102   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:08.562386   46141 cri.go:89] found id: ""
	I1202 19:28:08.562399   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.562405   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:08.562410   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:08.562464   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:08.589367   46141 cri.go:89] found id: ""
	I1202 19:28:08.589380   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.589387   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:08.589392   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:08.589446   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:08.614763   46141 cri.go:89] found id: ""
	I1202 19:28:08.614776   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.614782   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:08.614790   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:08.614806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:08.680003   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:08.680020   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:08.691092   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:08.691108   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:08.758435   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:08.751102   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.751837   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.753302   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.754013   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.755221   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:08.751102   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.751837   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.753302   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.754013   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.755221   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:08.758444   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:08.758455   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:08.838206   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:08.838225   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:11.377402   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:11.387381   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:11.387443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:11.416000   46141 cri.go:89] found id: ""
	I1202 19:28:11.416013   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.416020   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:11.416025   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:11.416086   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:11.440887   46141 cri.go:89] found id: ""
	I1202 19:28:11.440900   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.440907   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:11.440913   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:11.440980   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:11.469507   46141 cri.go:89] found id: ""
	I1202 19:28:11.469520   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.469527   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:11.469533   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:11.469589   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:11.494304   46141 cri.go:89] found id: ""
	I1202 19:28:11.494324   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.494331   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:11.494337   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:11.494395   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:11.519823   46141 cri.go:89] found id: ""
	I1202 19:28:11.519836   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.519843   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:11.519848   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:11.519905   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:11.544959   46141 cri.go:89] found id: ""
	I1202 19:28:11.544972   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.544980   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:11.544985   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:11.545043   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:11.569409   46141 cri.go:89] found id: ""
	I1202 19:28:11.569422   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.569429   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:11.569437   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:11.569449   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:11.605867   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:11.605883   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:11.672817   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:11.672835   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:11.683920   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:11.683937   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:11.748483   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:11.740906   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.741544   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743071   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743398   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.744924   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:11.740906   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.741544   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743071   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743398   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.744924   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:11.748494   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:11.748505   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:14.328100   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:14.338319   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:14.338385   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:14.368273   46141 cri.go:89] found id: ""
	I1202 19:28:14.368287   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.368293   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:14.368299   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:14.368353   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:14.393695   46141 cri.go:89] found id: ""
	I1202 19:28:14.393708   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.393715   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:14.393720   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:14.393778   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:14.419532   46141 cri.go:89] found id: ""
	I1202 19:28:14.419546   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.419552   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:14.419558   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:14.419611   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:14.444792   46141 cri.go:89] found id: ""
	I1202 19:28:14.444806   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.444812   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:14.444818   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:14.444874   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:14.473002   46141 cri.go:89] found id: ""
	I1202 19:28:14.473015   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.473022   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:14.473027   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:14.473082   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:14.500557   46141 cri.go:89] found id: ""
	I1202 19:28:14.500570   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.500577   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:14.500583   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:14.500639   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:14.531570   46141 cri.go:89] found id: ""
	I1202 19:28:14.531583   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.531591   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:14.531598   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:14.531608   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:14.563367   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:14.563385   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:14.629330   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:14.629348   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:14.640467   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:14.640482   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:14.703192   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:14.695736   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.696103   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697646   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697966   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.699508   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:14.695736   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.696103   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697646   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697966   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.699508   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:14.703201   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:14.703212   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:17.280934   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:17.290754   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:17.290816   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:17.315632   46141 cri.go:89] found id: ""
	I1202 19:28:17.315645   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.315652   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:17.315657   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:17.315715   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:17.339240   46141 cri.go:89] found id: ""
	I1202 19:28:17.339256   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.339281   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:17.339304   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:17.339361   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:17.362387   46141 cri.go:89] found id: ""
	I1202 19:28:17.362401   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.362408   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:17.362415   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:17.362471   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:17.388183   46141 cri.go:89] found id: ""
	I1202 19:28:17.388197   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.388204   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:17.388209   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:17.388264   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:17.417561   46141 cri.go:89] found id: ""
	I1202 19:28:17.417575   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.417582   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:17.417588   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:17.417643   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:17.441561   46141 cri.go:89] found id: ""
	I1202 19:28:17.441574   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.441581   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:17.441596   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:17.441678   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:17.467464   46141 cri.go:89] found id: ""
	I1202 19:28:17.467477   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.467483   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:17.467491   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:17.467501   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:17.543368   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:17.543386   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:17.574792   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:17.574807   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:17.641345   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:17.641363   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:17.651872   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:17.651892   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:17.719233   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:17.711006   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.711827   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713430   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713940   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.715763   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:17.711006   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.711827   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713430   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713940   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.715763   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:20.219437   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:20.229376   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:20.229437   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:20.254960   46141 cri.go:89] found id: ""
	I1202 19:28:20.254973   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.254980   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:20.254985   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:20.255048   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:20.280663   46141 cri.go:89] found id: ""
	I1202 19:28:20.280676   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.280683   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:20.280688   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:20.280744   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:20.309275   46141 cri.go:89] found id: ""
	I1202 19:28:20.309288   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.309295   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:20.309300   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:20.309354   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:20.334255   46141 cri.go:89] found id: ""
	I1202 19:28:20.334268   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.334275   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:20.334281   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:20.334334   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:20.359290   46141 cri.go:89] found id: ""
	I1202 19:28:20.359303   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.359310   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:20.359330   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:20.359385   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:20.387906   46141 cri.go:89] found id: ""
	I1202 19:28:20.387919   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.387931   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:20.387937   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:20.387995   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:20.412377   46141 cri.go:89] found id: ""
	I1202 19:28:20.412391   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.412398   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:20.412406   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:20.412421   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:20.478975   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:20.478994   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:20.491271   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:20.491286   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:20.559186   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:20.551818   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.552365   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554024   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554320   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.555835   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:20.551818   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.552365   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554024   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554320   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.555835   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:20.559197   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:20.559208   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:20.635117   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:20.635135   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:23.163845   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:23.174025   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:23.174084   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:23.198952   46141 cri.go:89] found id: ""
	I1202 19:28:23.198965   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.198972   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:23.198977   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:23.199040   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:23.227109   46141 cri.go:89] found id: ""
	I1202 19:28:23.227122   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.227128   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:23.227133   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:23.227194   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:23.252085   46141 cri.go:89] found id: ""
	I1202 19:28:23.252099   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.252106   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:23.252111   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:23.252178   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:23.282041   46141 cri.go:89] found id: ""
	I1202 19:28:23.282054   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.282061   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:23.282066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:23.282120   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:23.306149   46141 cri.go:89] found id: ""
	I1202 19:28:23.306163   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.306170   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:23.306176   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:23.306231   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:23.330130   46141 cri.go:89] found id: ""
	I1202 19:28:23.330143   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.330158   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:23.330165   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:23.330232   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:23.354289   46141 cri.go:89] found id: ""
	I1202 19:28:23.354303   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.354309   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:23.354317   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:23.354327   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:23.421463   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:23.421481   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:23.432425   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:23.432442   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:23.499162   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:23.491387   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.491769   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493283   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493585   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.495084   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:23.491387   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.491769   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493283   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493585   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.495084   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:23.499185   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:23.499198   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:23.574769   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:23.574787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:26.102251   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:26.112999   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:26.113059   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:26.139511   46141 cri.go:89] found id: ""
	I1202 19:28:26.139527   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.139534   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:26.139539   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:26.139595   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:26.163810   46141 cri.go:89] found id: ""
	I1202 19:28:26.163823   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.163830   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:26.163845   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:26.163903   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:26.195678   46141 cri.go:89] found id: ""
	I1202 19:28:26.195691   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.195716   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:26.195721   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:26.195784   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:26.221498   46141 cri.go:89] found id: ""
	I1202 19:28:26.221512   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.221519   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:26.221524   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:26.221591   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:26.246377   46141 cri.go:89] found id: ""
	I1202 19:28:26.246391   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.246397   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:26.246402   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:26.246464   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:26.270652   46141 cri.go:89] found id: ""
	I1202 19:28:26.270665   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.270673   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:26.270678   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:26.270763   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:26.296694   46141 cri.go:89] found id: ""
	I1202 19:28:26.296707   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.296714   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:26.296722   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:26.296735   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:26.371620   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:26.362743   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.363658   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365278   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365832   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.367443   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:26.362743   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.363658   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365278   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365832   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.367443   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:26.371631   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:26.371641   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:26.451711   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:26.451734   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:26.483175   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:26.483191   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:26.549681   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:26.549701   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:29.061808   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:29.072772   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:29.072827   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:29.101985   46141 cri.go:89] found id: ""
	I1202 19:28:29.101999   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.102006   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:29.102013   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:29.102074   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:29.128784   46141 cri.go:89] found id: ""
	I1202 19:28:29.128797   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.128803   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:29.128808   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:29.128862   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:29.156726   46141 cri.go:89] found id: ""
	I1202 19:28:29.156740   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.156747   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:29.156753   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:29.156810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:29.186146   46141 cri.go:89] found id: ""
	I1202 19:28:29.186159   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.186167   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:29.186173   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:29.186230   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:29.210367   46141 cri.go:89] found id: ""
	I1202 19:28:29.210381   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.210387   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:29.210392   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:29.210448   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:29.234607   46141 cri.go:89] found id: ""
	I1202 19:28:29.234620   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.234635   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:29.234641   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:29.234695   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:29.260124   46141 cri.go:89] found id: ""
	I1202 19:28:29.260137   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.260144   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:29.260151   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:29.260161   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:29.270869   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:29.270885   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:29.335425   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:29.327338   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.328130   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.329607   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.330178   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.332063   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:29.327338   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.328130   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.329607   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.330178   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.332063   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:29.335435   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:29.335448   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:29.416026   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:29.416053   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:29.444738   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:29.444757   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:32.015450   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:32.028692   46141 kubeadm.go:602] duration metric: took 4m2.303606504s to restartPrimaryControlPlane
	W1202 19:28:32.028752   46141 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 19:28:32.028882   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:28:32.448460   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:28:32.461105   46141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:28:32.468953   46141 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:28:32.469018   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:28:32.476620   46141 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:28:32.476629   46141 kubeadm.go:158] found existing configuration files:
	
	I1202 19:28:32.476680   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:28:32.484342   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:28:32.484396   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:28:32.491816   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:28:32.499468   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:28:32.499526   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:28:32.506680   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:28:32.513998   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:28:32.514056   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:28:32.521915   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:28:32.529746   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:28:32.529813   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:28:32.537427   46141 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:28:32.575514   46141 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:28:32.575563   46141 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:28:32.649801   46141 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:28:32.649866   46141 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:28:32.649900   46141 kubeadm.go:319] OS: Linux
	I1202 19:28:32.649943   46141 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:28:32.649990   46141 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:28:32.650036   46141 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:28:32.650083   46141 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:28:32.650129   46141 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:28:32.650176   46141 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:28:32.650220   46141 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:28:32.650266   46141 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:28:32.650311   46141 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:28:32.711361   46141 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:28:32.711478   46141 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:28:32.711574   46141 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:28:32.719716   46141 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:28:32.725408   46141 out.go:252]   - Generating certificates and keys ...
	I1202 19:28:32.725506   46141 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:28:32.725580   46141 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:28:32.725675   46141 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:28:32.725741   46141 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:28:32.725818   46141 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:28:32.725877   46141 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:28:32.725939   46141 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:28:32.726006   46141 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:28:32.726085   46141 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:28:32.726169   46141 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:28:32.726206   46141 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:28:32.726266   46141 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:28:32.962990   46141 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:28:33.139589   46141 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:28:33.816592   46141 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:28:34.040085   46141 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:28:34.279545   46141 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:28:34.280074   46141 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:28:34.282763   46141 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:28:34.285708   46141 out.go:252]   - Booting up control plane ...
	I1202 19:28:34.285809   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:28:34.285891   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:28:34.288012   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:28:34.303407   46141 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:28:34.303530   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:28:34.311292   46141 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:28:34.311561   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:28:34.311687   46141 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:28:34.441389   46141 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:28:34.442903   46141 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:32:34.442631   46141 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001443729s
	I1202 19:32:34.442655   46141 kubeadm.go:319] 
	I1202 19:32:34.442716   46141 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:32:34.442751   46141 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:32:34.442868   46141 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:32:34.442876   46141 kubeadm.go:319] 
	I1202 19:32:34.443019   46141 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:32:34.443050   46141 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:32:34.443105   46141 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:32:34.443119   46141 kubeadm.go:319] 
	I1202 19:32:34.446600   46141 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:32:34.447010   46141 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:32:34.447116   46141 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:32:34.447358   46141 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:32:34.447364   46141 kubeadm.go:319] 
	I1202 19:32:34.447431   46141 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 19:32:34.447530   46141 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443729s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 19:32:34.447615   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:32:34.857158   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:32:34.869767   46141 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:32:34.869822   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:32:34.877453   46141 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:32:34.877463   46141 kubeadm.go:158] found existing configuration files:
	
	I1202 19:32:34.877520   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:32:34.885001   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:32:34.885057   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:32:34.892315   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:32:34.899801   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:32:34.899854   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:32:34.907104   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:32:34.914843   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:32:34.914905   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:32:34.922357   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:32:34.930005   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:32:34.930062   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:32:34.937883   46141 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:32:34.977710   46141 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:32:34.977941   46141 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:32:35.052803   46141 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:32:35.052872   46141 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:32:35.052916   46141 kubeadm.go:319] OS: Linux
	I1202 19:32:35.052967   46141 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:32:35.053025   46141 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:32:35.053081   46141 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:32:35.053132   46141 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:32:35.053189   46141 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:32:35.053247   46141 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:32:35.053296   46141 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:32:35.053361   46141 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:32:35.053405   46141 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:32:35.129057   46141 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:32:35.129160   46141 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:32:35.129249   46141 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:32:35.136437   46141 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:32:35.141766   46141 out.go:252]   - Generating certificates and keys ...
	I1202 19:32:35.141858   46141 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:32:35.141951   46141 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:32:35.142045   46141 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:32:35.142120   46141 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:32:35.142195   46141 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:32:35.142254   46141 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:32:35.142330   46141 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:32:35.142391   46141 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:32:35.142465   46141 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:32:35.142537   46141 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:32:35.142573   46141 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:32:35.142628   46141 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:32:35.719108   46141 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:32:35.855328   46141 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:32:36.315829   46141 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:32:36.611755   46141 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:32:36.762758   46141 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:32:36.763311   46141 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:32:36.766390   46141 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:32:36.769564   46141 out.go:252]   - Booting up control plane ...
	I1202 19:32:36.769677   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:32:36.769754   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:32:36.771251   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:32:36.785826   46141 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:32:36.785928   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:32:36.793103   46141 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:32:36.793426   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:32:36.793594   46141 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:32:36.913663   46141 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:32:36.913775   46141 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:36:36.914797   46141 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001215513s
	I1202 19:36:36.914820   46141 kubeadm.go:319] 
	I1202 19:36:36.914918   46141 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:36:36.915114   46141 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:36:36.915295   46141 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:36:36.915303   46141 kubeadm.go:319] 
	I1202 19:36:36.915482   46141 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:36:36.915772   46141 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:36:36.915825   46141 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:36:36.915828   46141 kubeadm.go:319] 
	I1202 19:36:36.923850   46141 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:36:36.924318   46141 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:36:36.924432   46141 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:36:36.924695   46141 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:36:36.924703   46141 kubeadm.go:319] 
	I1202 19:36:36.924833   46141 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 19:36:36.924858   46141 kubeadm.go:403] duration metric: took 12m7.236978439s to StartCluster
	I1202 19:36:36.924902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:36:36.924959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:36:36.952746   46141 cri.go:89] found id: ""
	I1202 19:36:36.952760   46141 logs.go:282] 0 containers: []
	W1202 19:36:36.952767   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:36:36.952772   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:36:36.952828   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:36:36.977200   46141 cri.go:89] found id: ""
	I1202 19:36:36.977214   46141 logs.go:282] 0 containers: []
	W1202 19:36:36.977221   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:36:36.977226   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:36:36.977291   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:36:37.002232   46141 cri.go:89] found id: ""
	I1202 19:36:37.002246   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.002253   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:36:37.002258   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:36:37.002321   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:36:37.051601   46141 cri.go:89] found id: ""
	I1202 19:36:37.051615   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.051621   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:36:37.051626   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:36:37.051681   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:36:37.102950   46141 cri.go:89] found id: ""
	I1202 19:36:37.102976   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.102983   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:36:37.102988   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:36:37.103051   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:36:37.131342   46141 cri.go:89] found id: ""
	I1202 19:36:37.131355   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.131362   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:36:37.131368   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:36:37.131423   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:36:37.159192   46141 cri.go:89] found id: ""
	I1202 19:36:37.159206   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.159213   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:36:37.159221   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:36:37.159234   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:36:37.170095   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:36:37.170110   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:36:37.234222   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:36:37.226220   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.226951   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.228557   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.229059   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.230654   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:36:37.226220   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.226951   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.228557   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.229059   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.230654   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:36:37.234232   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:36:37.234242   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:36:37.306216   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:36:37.306234   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:36:37.334163   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:36:37.334178   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 19:36:37.399997   46141 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 19:36:37.400040   46141 out.go:285] * 
	W1202 19:36:37.400110   46141 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:36:37.400129   46141 out.go:285] * 
	W1202 19:36:37.402271   46141 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:36:37.407816   46141 out.go:203] 
	W1202 19:36:37.411562   46141 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:36:37.411641   46141 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 19:36:37.411664   46141 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 19:36:37.415811   46141 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546654939Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546834414Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546950457Z" level=info msg="Create NRI interface"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.5471107Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547130474Z" level=info msg="runtime interface created"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.54714466Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547151634Z" level=info msg="runtime interface starting up..."
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547157616Z" level=info msg="starting plugins..."
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547170686Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547251727Z" level=info msg="No systemd watchdog enabled"
	Dec 02 19:24:28 functional-374330 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.715009926Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=bc19958f-d803-4cd2-a545-4f6c118c1f40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.716039792Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=97921bbe-b2e3-494c-be19-702e5072b6db name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.716591601Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=702ce713-4736-4f82-bd4c-9fc9629fcb4d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.717128034Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5900f7cc-9a33-4e7a-8a73-829e63e64047 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.717627973Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0a3735ac-393a-45fe-a0d5-34b181ae2dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.718273997Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4854b9da-7f98-4e1b-9a6a-97fc85aeb622 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.718754056Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=f046502e-805f-4087-97ee-276ea86f9117 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.132448562Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=bfb0729f-fcf5-4cf1-8661-79e44060815d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.133109196Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=59868b2f-ef1f-42db-9580-1c52177e5173 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.133599056Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=0dadf3fc-12a7-405c-8560-5fb835ac24e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.134131974Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f3eedcce-a194-4413-8ad5-a61c4ca64183 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.134584067Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0d9672a7-dea9-4cd7-b618-4662ee6fbedc name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.135094472Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=61806fbf-e06a-40e0-ab81-3632b0f3ac8c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.135559257Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=e966dc55-aa48-4909-b2a5-1769d8bd5c4c name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:36:38.611709   21811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:38.612344   21811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:38.614035   21811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:38.614620   21811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:38.615962   21811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:36:38 up  1:18,  0 user,  load average: 0.15, 0.20, 0.28
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:36:36 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:36:37 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 02 19:36:37 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:37 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:37 functional-374330 kubelet[21655]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:37 functional-374330 kubelet[21655]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:37 functional-374330 kubelet[21655]: E1202 19:36:37.099746   21655 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:36:37 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:36:37 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:36:37 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 02 19:36:37 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:37 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:37 functional-374330 kubelet[21726]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:37 functional-374330 kubelet[21726]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:37 functional-374330 kubelet[21726]: E1202 19:36:37.854034   21726 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:36:37 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:36:37 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:36:38 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 02 19:36:38 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:38 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:38 functional-374330 kubelet[21810]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:38 functional-374330 kubelet[21810]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:38 functional-374330 kubelet[21810]: E1202 19:36:38.603844   21810 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:36:38 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:36:38 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
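
The dump above ends with the kubelet crash-looping (restart counter past 960) while the container-status table stays empty and the apiserver never answers on port 8441. To watch that loop directly on the node, the journalctl check suggested in the log can be run through the same ssh path the suite already uses; a minimal sketch, assuming only the profile name shown in this report:

minikube -p functional-374330 ssh sudo journalctl -xeu kubelet --no-pager
minikube -p functional-374330 ssh sudo crictl ps -a   # should mirror the empty "container status" table above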
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (414.730822ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.38s)
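
The repeated kubelet exit above ("kubelet is configured to not run on a host using cgroup v1") matches the SystemVerification warning earlier in the same log: on this cgroup v1 host, kubelet v1.35.0-beta.0 refuses to start unless the FailCgroupV1 option is explicitly set to false, or the host is migrated to cgroup v2. A hedged sketch of both workarounds follows; the kubelet config path inside the node container and the GRUB edit on the Ubuntu 20.04 host are assumptions, not something this run verified.

# Option A (assumption: the generated kubelet config lives at /var/lib/kubelet/config.yaml
# in the node container; field name taken from the warning text / KEP-5573):
minikube -p functional-374330 ssh "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"

# Option B (on the Jenkins host itself): boot with the unified cgroup v2 hierarchy so
# the validation no longer applies; requires a host reboot.
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
sudo update-grub && sudo reboot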

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-374330 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-374330 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (61.782691ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-374330 get po -l tier=control-plane -n kube-system -o=json": exit status 1
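
The ComponentHealth failure is downstream of the same outage: kubectl simply cannot reach the apiserver endpoint at 192.168.49.2:8441. Two quick probes (a sketch, using the container address from the docker inspect below) would currently fail with the same connection-refused error:

curl -sk https://192.168.49.2:8441/healthz
kubectl --context functional-374330 get --raw='/healthz'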
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
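
Most of the inspect output above matters only for its port map: the apiserver port 8441/tcp is published to the host at 127.0.0.1:32786. The same Go-template query that minikube itself runs for the SSH port (visible in the "Last Start" log further down) can read it back directly:

docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-374330
# prints 32786 for the container captured above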
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (302.645305ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 logs -n 25: (1.032159665s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-535807 image ls --format json --alsologtostderr                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr                                            │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls --format table --alsologtostderr                                                                                       │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ update-context │ functional-535807 update-context --alsologtostderr -v=2                                                                                           │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ image          │ functional-535807 image ls                                                                                                                        │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ delete         │ -p functional-535807                                                                                                                              │ functional-535807 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │ 02 Dec 25 19:09 UTC │
	│ start          │ -p functional-374330 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:09 UTC │                     │
	│ start          │ -p functional-374330 --alsologtostderr -v=8                                                                                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:18 UTC │                     │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add registry.k8s.io/pause:latest                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache add minikube-local-cache-test:functional-374330                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ functional-374330 cache delete minikube-local-cache-test:functional-374330                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl images                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	│ cache          │ functional-374330 cache reload                                                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh            │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ kubectl        │ functional-374330 kubectl -- --context functional-374330 get pods                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	│ start          │ -p functional-374330 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:24:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:24:25.235145   46141 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:24:25.235262   46141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:24:25.235266   46141 out.go:374] Setting ErrFile to fd 2...
	I1202 19:24:25.235270   46141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:24:25.235501   46141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:24:25.235832   46141 out.go:368] Setting JSON to false
	I1202 19:24:25.236657   46141 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4004,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:24:25.236712   46141 start.go:143] virtualization:  
	I1202 19:24:25.240137   46141 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:24:25.243026   46141 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:24:25.243116   46141 notify.go:221] Checking for updates...
	I1202 19:24:25.249453   46141 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:24:25.252235   46141 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:24:25.255042   46141 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:24:25.257985   46141 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:24:25.260839   46141 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:24:25.264178   46141 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:24:25.264323   46141 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:24:25.284942   46141 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:24:25.285038   46141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:24:25.377890   46141 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 19:24:25.369067605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:24:25.377983   46141 docker.go:319] overlay module found
	I1202 19:24:25.380979   46141 out.go:179] * Using the docker driver based on existing profile
	I1202 19:24:25.383947   46141 start.go:309] selected driver: docker
	I1202 19:24:25.383955   46141 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:25.384041   46141 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:24:25.384143   46141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:24:25.448724   46141 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 19:24:25.440009169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:24:25.449135   46141 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:24:25.449156   46141 cni.go:84] Creating CNI manager for ""
	I1202 19:24:25.449204   46141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:24:25.449250   46141 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:25.452291   46141 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:24:25.455020   46141 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:24:25.457907   46141 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:24:25.460700   46141 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:24:25.460741   46141 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:24:25.479854   46141 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:24:25.479865   46141 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:24:25.525268   46141 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:24:25.722344   46141 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:24:25.722516   46141 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:24:25.722575   46141 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722662   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:24:25.722674   46141 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.293µs
	I1202 19:24:25.722687   46141 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:24:25.722699   46141 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722728   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:24:25.722732   46141 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 34.97µs
	I1202 19:24:25.722737   46141 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722755   46141 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722765   46141 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:24:25.722787   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:24:25.722792   46141 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722800   46141 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.388µs
	I1202 19:24:25.722806   46141 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722816   46141 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722833   46141 start.go:364] duration metric: took 28.102µs to acquireMachinesLock for "functional-374330"
	I1202 19:24:25.722844   46141 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:24:25.722848   46141 fix.go:54] fixHost starting: 
	I1202 19:24:25.722868   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:24:25.722874   46141 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 59.51µs
	I1202 19:24:25.722879   46141 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722888   46141 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722914   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:24:25.722918   46141 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.859µs
	I1202 19:24:25.722926   46141 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722934   46141 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722961   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:24:25.722965   46141 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.041µs
	I1202 19:24:25.722969   46141 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:24:25.722984   46141 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.723013   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:24:25.723018   46141 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.477µs
	I1202 19:24:25.723022   46141 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:24:25.723030   46141 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.723054   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:24:25.723058   46141 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.956µs
	I1202 19:24:25.723062   46141 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:24:25.723069   46141 cache.go:87] Successfully saved all images to host disk.
	I1202 19:24:25.723135   46141 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:24:25.740024   46141 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:24:25.740043   46141 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:24:25.743422   46141 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:24:25.743444   46141 machine.go:94] provisionDockerMachine start ...
	I1202 19:24:25.743520   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:25.759952   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:25.760267   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:25.760274   46141 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:24:25.913242   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:24:25.913255   46141 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:24:25.913315   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:25.930816   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:25.931108   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:25.931116   46141 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:24:26.092717   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:24:26.092791   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.112703   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:26.112993   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:26.113006   46141 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:24:26.261761   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:24:26.261776   46141 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:24:26.261797   46141 ubuntu.go:190] setting up certificates
	I1202 19:24:26.261807   46141 provision.go:84] configureAuth start
	I1202 19:24:26.261862   46141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:24:26.279208   46141 provision.go:143] copyHostCerts
	I1202 19:24:26.279270   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:24:26.279282   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:24:26.279355   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:24:26.279450   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:24:26.279454   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:24:26.279478   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:24:26.279560   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:24:26.279563   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:24:26.279586   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:24:26.279633   46141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:24:26.509539   46141 provision.go:177] copyRemoteCerts
	I1202 19:24:26.509599   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:24:26.509644   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.526423   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:26.629290   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:24:26.645497   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:24:26.662152   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:24:26.678745   46141 provision.go:87] duration metric: took 416.916855ms to configureAuth
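
configureAuth above regenerates the Docker-machine style server certificate with the SAN list printed at provision.go:117. A hedged sketch of minting such a certificate with Go's crypto/x509, assuming the CA cert and key are PEM-encoded RSA PKCS#1 files (the file names here are placeholders):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    // Mint a server cert signed by the CA, valid for the SANs listed in the
    // provision.go:117 line above (127.0.0.1 192.168.49.2 functional-374330 localhost minikube).
    func main() {
    	caCertPEM, err := os.ReadFile("ca.pem")
    	check(err)
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	check(err)
    	caBlock, _ := pem.Decode(caCertPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	check(err)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
    	check(err)

    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-374330"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config below
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"functional-374330", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}

    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	check(err)
    	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
    	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
    }
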
	I1202 19:24:26.678762   46141 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:24:26.678944   46141 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:24:26.679035   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.696214   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:26.696565   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:26.696576   46141 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:24:27.030556   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:24:27.030570   46141 machine.go:97] duration metric: took 1.287120124s to provisionDockerMachine
	I1202 19:24:27.030580   46141 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:24:27.030591   46141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:24:27.030695   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:24:27.030734   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.047988   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.153876   46141 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:24:27.157492   46141 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:24:27.157509   46141 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:24:27.157519   46141 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:24:27.157573   46141 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:24:27.157644   46141 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:24:27.157766   46141 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:24:27.157814   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:24:27.165310   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:24:27.182588   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:24:27.199652   46141 start.go:296] duration metric: took 169.058439ms for postStartSetup
	I1202 19:24:27.199721   46141 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:24:27.199772   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.216431   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.322237   46141 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:24:27.326538   46141 fix.go:56] duration metric: took 1.603683597s for fixHost
	I1202 19:24:27.326551   46141 start.go:83] releasing machines lock for "functional-374330", held for 1.603712807s
	I1202 19:24:27.326613   46141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:24:27.342449   46141 ssh_runner.go:195] Run: cat /version.json
	I1202 19:24:27.342488   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.342715   46141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:24:27.342781   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.364991   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.373848   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.555572   46141 ssh_runner.go:195] Run: systemctl --version
	I1202 19:24:27.562641   46141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:24:27.610413   46141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:24:27.614481   46141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:24:27.614543   46141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:24:27.622250   46141 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
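
The find/mv step above renames any bridge or podman CNI config under /etc/cni/net.d to *.mk_disabled so the runtime stops loading it (here there was nothing to disable, and kindnet is recommended later in this run). A rough Go equivalent of that rename pass, for illustration only:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // Rename bridge/podman CNI configs to <name>.mk_disabled, skipping files
    // that have already been disabled.
    func main() {
    	matches, err := filepath.Glob("/etc/cni/net.d/*")
    	if err != nil {
    		panic(err)
    	}
    	for _, path := range matches {
    		base := filepath.Base(path)
    		if strings.HasSuffix(base, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
    			fmt.Printf("disabling %s\n", path)
    			if err := os.Rename(path, path+".mk_disabled"); err != nil {
    				panic(err)
    			}
    		}
    	}
    }
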
	I1202 19:24:27.622263   46141 start.go:496] detecting cgroup driver to use...
	I1202 19:24:27.622291   46141 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:24:27.622334   46141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:24:27.637407   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:24:27.650559   46141 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:24:27.650610   46141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:24:27.665862   46141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:24:27.678201   46141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:24:27.787007   46141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:24:27.899090   46141 docker.go:234] disabling docker service ...
	I1202 19:24:27.899177   46141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:24:27.914485   46141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:24:27.927681   46141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:24:28.045412   46141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:24:28.177124   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:24:28.189334   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:24:28.202961   46141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:24:28.203015   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.211343   46141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:24:28.211423   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.219933   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.227929   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.236036   46141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:24:28.243301   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.251359   46141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.259074   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.267235   46141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:24:28.274309   46141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:24:28.280789   46141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:24:28.409376   46141 ssh_runner.go:195] Run: sudo systemctl restart crio
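
The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports through default_sysctls before CRI-O is restarted. A hypothetical Go helper that applies the same kind of idempotent key replacement to the drop-in (values copied from the crio.go lines above; the helper itself is not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // setCrioOption forces a `key = "value"` line in the drop-in, replacing any
    // existing assignment or appending one if the key is absent.
    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	replaced := false
    	for i, line := range lines {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasPrefix(trimmed, key+" =") || strings.HasPrefix(trimmed, key+"=") {
    			lines[i] = fmt.Sprintf("%s = %q", key, value)
    			replaced = true
    		}
    	}
    	if !replaced {
    		lines = append(lines, fmt.Sprintf("%s = %q", key, value))
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	for k, v := range map[string]string{
    		"pause_image":    "registry.k8s.io/pause:3.10.1",
    		"cgroup_manager": "cgroupfs",
    		"conmon_cgroup":  "pod",
    	} {
    		if err := setCrioOption(conf, k, v); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			os.Exit(1)
    		}
    	}
    }
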
	I1202 19:24:28.552601   46141 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:24:28.552676   46141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:24:28.556545   46141 start.go:564] Will wait 60s for crictl version
	I1202 19:24:28.556594   46141 ssh_runner.go:195] Run: which crictl
	I1202 19:24:28.560016   46141 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:24:28.584096   46141 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:24:28.584179   46141 ssh_runner.go:195] Run: crio --version
	I1202 19:24:28.612035   46141 ssh_runner.go:195] Run: crio --version
	I1202 19:24:28.644724   46141 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:24:28.647719   46141 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:24:28.663830   46141 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:24:28.670469   46141 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 19:24:28.673257   46141 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:24:28.673378   46141 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:24:28.673715   46141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:24:28.712979   46141 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:24:28.712990   46141 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:24:28.712996   46141 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:24:28.713091   46141 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:24:28.713167   46141 ssh_runner.go:195] Run: crio config
	I1202 19:24:28.766896   46141 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 19:24:28.766918   46141 cni.go:84] Creating CNI manager for ""
	I1202 19:24:28.766927   46141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:24:28.766941   46141 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:24:28.766963   46141 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:24:28.767080   46141 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:24:28.767147   46141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:24:28.774515   46141 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:24:28.774573   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:24:28.781818   46141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:24:28.793879   46141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:24:28.805690   46141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
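
The three "scp memory" lines above write the generated kubelet drop-in, kubelet unit, and kubeadm.yaml.new straight from memory to the node. One simple way to implement that kind of transfer over an existing SSH connection (the real ssh_runner may do it differently) is to pipe the buffer into sudo tee:

    package sketch

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // copyMemory streams an in-memory buffer over an existing SSH client and
    // installs it on the node at dest with root privileges.
    func copyMemory(client *ssh.Client, contents []byte, dest string) error {
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	session.Stdin = bytes.NewReader(contents)
    	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
    }
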
	I1202 19:24:28.818120   46141 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:24:28.821584   46141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:24:28.923612   46141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:24:29.044163   46141 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:24:29.044174   46141 certs.go:195] generating shared ca certs ...
	I1202 19:24:29.044188   46141 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:24:29.044325   46141 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:24:29.044362   46141 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:24:29.044367   46141 certs.go:257] generating profile certs ...
	I1202 19:24:29.044449   46141 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:24:29.044505   46141 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:24:29.044543   46141 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:24:29.044646   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:24:29.044677   46141 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:24:29.044683   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:24:29.044708   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:24:29.044730   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:24:29.044752   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:24:29.044793   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:24:29.045393   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:24:29.065539   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:24:29.085818   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:24:29.107933   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:24:29.124745   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:24:29.141714   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:24:29.158359   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:24:29.174925   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:24:29.191660   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:24:29.208637   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:24:29.226113   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:24:29.242250   46141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:24:29.254421   46141 ssh_runner.go:195] Run: openssl version
	I1202 19:24:29.260244   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:24:29.267946   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.271417   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.271472   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.312066   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:24:29.319673   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:24:29.327613   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.331149   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.331213   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.371529   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:24:29.378966   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:24:29.386811   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.390484   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.390535   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.430996   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:24:29.438578   46141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:24:29.442282   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:24:29.482760   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:24:29.523856   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:24:29.564389   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:24:29.604810   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:24:29.645380   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
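
Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. An equivalent check in Go, assuming one of the same certificate paths, reads the cert and compares NotAfter:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // Rough Go equivalent of `openssl x509 -checkend 86400`: exit non-zero if the
    // certificate expires within the next 24 hours.
    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 86400 seconds")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
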
	I1202 19:24:29.687886   46141 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:29.687963   46141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:24:29.688021   46141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:24:29.717432   46141 cri.go:89] found id: ""
	I1202 19:24:29.717490   46141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:24:29.725067   46141 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:24:29.725077   46141 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:24:29.725126   46141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:24:29.732065   46141 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.732614   46141 kubeconfig.go:125] found "functional-374330" server: "https://192.168.49.2:8441"
	I1202 19:24:29.734000   46141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:24:29.741333   46141 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 19:09:53.796915722 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 19:24:28.810106590 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 19:24:29.741350   46141 kubeadm.go:1161] stopping kube-system containers ...
	I1202 19:24:29.741369   46141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 19:24:29.741422   46141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:24:29.768496   46141 cri.go:89] found id: ""
	I1202 19:24:29.768555   46141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 19:24:29.784309   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:24:29.792418   46141 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  2 19:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 19:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  2 19:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 19:14 /etc/kubernetes/scheduler.conf
	
	I1202 19:24:29.792472   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:24:29.800190   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:24:29.807339   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.807391   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:24:29.814250   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:24:29.821376   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.821427   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:24:29.828870   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:24:29.836580   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.836638   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:24:29.843919   46141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:24:29.851701   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:29.899912   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.003595   46141 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.103659313s)
	I1202 19:24:31.003654   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.210419   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.280327   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.324104   46141 api_server.go:52] waiting for apiserver process to appear ...
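
The burst of near-identical pgrep probes that follows is the roughly half-second poll behind api_server.go:52: minikube keeps asking whether a kube-apiserver process exists until one appears or the wait times out (in this run none appears before log collection kicks in). A minimal sketch of that loop, probing locally rather than through the SSH runner used in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess re-runs the same pgrep probe every ~500ms until it
    // succeeds or the timeout expires.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // a kube-apiserver process exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(60 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
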
	I1202 19:24:31.324170   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:31.824388   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:32.324845   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:32.825182   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:33.324373   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:33.824654   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:34.325193   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:34.825112   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:35.324714   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:35.824303   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:36.324356   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:36.824683   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:37.324294   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:37.824358   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:38.324922   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:38.824376   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:39.324270   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:39.825008   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:40.324553   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:40.824838   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:41.325254   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:41.824311   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:42.324452   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:42.824362   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:43.325153   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:43.824379   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:44.324948   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:44.824287   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:45.325093   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:45.824914   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:46.324315   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:46.825135   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:47.324688   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:47.824319   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:48.325046   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:48.824341   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:49.324306   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:49.824985   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:50.324502   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:50.825062   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:51.325159   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:51.824329   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:52.324431   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:52.824365   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:53.324584   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:53.824229   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:54.324898   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:54.825268   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:55.324621   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:55.824623   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:56.325215   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:56.824326   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:57.324724   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:57.824643   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:58.325213   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:58.824317   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:59.324263   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:59.824993   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:00.324689   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:00.824372   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:01.324768   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:01.824973   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:02.324385   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:02.824324   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:03.325090   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:03.824792   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:04.324258   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:04.825092   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:05.324727   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:05.825067   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:06.325261   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:06.824374   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:07.324258   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:07.825117   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:08.324373   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:08.824931   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:09.324369   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:09.824858   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:10.324555   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:10.824370   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:11.324369   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:11.824824   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:12.325272   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:12.824975   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:13.324579   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:13.824349   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:14.324992   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:14.824471   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:15.325189   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:15.824307   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:16.324299   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:16.824860   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:17.324477   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:17.824853   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:18.324910   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:18.825002   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:19.324312   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:19.824665   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:20.324238   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:20.824261   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:21.325216   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:21.824750   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:22.324310   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:22.825285   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:23.325114   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:23.824701   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:24.324390   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:24.825161   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:25.325162   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:25.824364   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:26.324725   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:26.825185   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:27.324377   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:27.825213   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:28.324403   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:28.824310   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:29.324960   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:29.824818   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:30.325151   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:30.824591   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:31.324373   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:31.324449   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:31.353616   46141 cri.go:89] found id: ""
	I1202 19:25:31.353629   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.353636   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:31.353642   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:31.353718   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:31.378636   46141 cri.go:89] found id: ""
	I1202 19:25:31.378649   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.378656   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:31.378661   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:31.378716   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:31.403292   46141 cri.go:89] found id: ""
	I1202 19:25:31.403305   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.403312   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:31.403317   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:31.403371   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:31.427054   46141 cri.go:89] found id: ""
	I1202 19:25:31.427067   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.427074   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:31.427079   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:31.427133   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:31.451516   46141 cri.go:89] found id: ""
	I1202 19:25:31.451529   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.451536   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:31.451541   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:31.451595   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:31.474863   46141 cri.go:89] found id: ""
	I1202 19:25:31.474876   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.474889   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:31.474895   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:31.474967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:31.499414   46141 cri.go:89] found id: ""
	I1202 19:25:31.499427   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.499434   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:31.499442   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:31.499454   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:31.563997   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:31.564014   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:31.575066   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:31.575080   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:31.644130   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:31.635359   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.636084   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.637852   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.638523   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.640440   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:31.635359   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.636084   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.637852   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.638523   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.640440   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:31.644152   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:31.644164   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:31.720566   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:31.720584   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:34.247873   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:34.257765   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:34.257820   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:34.284109   46141 cri.go:89] found id: ""
	I1202 19:25:34.284122   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.284129   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:34.284134   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:34.284185   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:34.322934   46141 cri.go:89] found id: ""
	I1202 19:25:34.322947   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.322954   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:34.322959   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:34.323011   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:34.356765   46141 cri.go:89] found id: ""
	I1202 19:25:34.356778   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.356785   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:34.356790   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:34.356843   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:34.383799   46141 cri.go:89] found id: ""
	I1202 19:25:34.383811   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.383818   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:34.383824   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:34.383875   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:34.407104   46141 cri.go:89] found id: ""
	I1202 19:25:34.407117   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.407133   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:34.407139   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:34.407207   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:34.431504   46141 cri.go:89] found id: ""
	I1202 19:25:34.431517   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.431523   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:34.431529   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:34.431624   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:34.459463   46141 cri.go:89] found id: ""
	I1202 19:25:34.459477   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.459484   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:34.459492   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:34.459503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:34.524752   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:34.524770   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:34.537010   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:34.537025   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:34.599686   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:34.591441   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.592120   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.593844   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.594367   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.596057   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:34.591441   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.592120   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.593844   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.594367   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.596057   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:34.599696   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:34.599708   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:34.676464   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:34.676483   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:37.209911   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:37.219636   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:37.219691   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:37.243765   46141 cri.go:89] found id: ""
	I1202 19:25:37.243778   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.243785   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:37.243790   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:37.243842   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:37.272015   46141 cri.go:89] found id: ""
	I1202 19:25:37.272028   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.272035   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:37.272040   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:37.272096   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:37.296807   46141 cri.go:89] found id: ""
	I1202 19:25:37.296819   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.296835   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:37.296840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:37.296893   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:37.327436   46141 cri.go:89] found id: ""
	I1202 19:25:37.327449   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.327456   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:37.327461   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:37.327515   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:37.362906   46141 cri.go:89] found id: ""
	I1202 19:25:37.362919   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.362926   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:37.362931   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:37.362985   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:37.386876   46141 cri.go:89] found id: ""
	I1202 19:25:37.386889   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.386896   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:37.386902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:37.386976   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:37.410131   46141 cri.go:89] found id: ""
	I1202 19:25:37.410144   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.410151   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:37.410158   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:37.410169   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:37.420302   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:37.420317   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:37.483848   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:37.476112   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477214   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477968   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.478882   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.480477   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:37.476112   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477214   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477968   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.478882   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.480477   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:37.483857   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:37.483867   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:37.562871   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:37.562889   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:37.593595   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:37.593609   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:40.162349   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:40.172453   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:40.172514   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:40.199726   46141 cri.go:89] found id: ""
	I1202 19:25:40.199756   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.199763   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:40.199768   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:40.199825   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:40.229015   46141 cri.go:89] found id: ""
	I1202 19:25:40.229029   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.229037   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:40.229042   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:40.229097   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:40.255016   46141 cri.go:89] found id: ""
	I1202 19:25:40.255029   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.255036   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:40.255041   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:40.255104   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:40.280314   46141 cri.go:89] found id: ""
	I1202 19:25:40.280337   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.280343   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:40.280349   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:40.280409   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:40.317261   46141 cri.go:89] found id: ""
	I1202 19:25:40.317275   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.317281   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:40.317286   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:40.317351   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:40.350568   46141 cri.go:89] found id: ""
	I1202 19:25:40.350581   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.350588   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:40.350602   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:40.350655   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:40.376758   46141 cri.go:89] found id: ""
	I1202 19:25:40.376772   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.376786   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:40.376794   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:40.376805   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:40.452695   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:40.452719   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:40.478860   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:40.478875   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:40.558280   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:40.558307   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:40.569138   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:40.569159   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:40.633967   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:40.626503   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.627229   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.628749   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.629139   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.630662   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:40.626503   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.627229   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.628749   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.629139   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.630662   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:43.135632   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:43.145532   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:43.145592   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:43.170325   46141 cri.go:89] found id: ""
	I1202 19:25:43.170338   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.170345   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:43.170372   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:43.170432   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:43.194956   46141 cri.go:89] found id: ""
	I1202 19:25:43.194970   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.194977   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:43.194982   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:43.195039   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:43.221778   46141 cri.go:89] found id: ""
	I1202 19:25:43.221792   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.221800   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:43.221805   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:43.221862   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:43.248205   46141 cri.go:89] found id: ""
	I1202 19:25:43.248218   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.248225   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:43.248230   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:43.248283   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:43.275958   46141 cri.go:89] found id: ""
	I1202 19:25:43.275971   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.275979   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:43.275984   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:43.276040   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:43.311994   46141 cri.go:89] found id: ""
	I1202 19:25:43.312006   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.312013   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:43.312018   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:43.312070   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:43.338867   46141 cri.go:89] found id: ""
	I1202 19:25:43.338881   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.338888   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:43.338896   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:43.338907   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:43.370951   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:43.370966   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:43.439006   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:43.439023   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:43.449811   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:43.449827   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:43.523274   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:43.515029   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.515710   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.517443   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.518065   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.519776   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:43.515029   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.515710   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.517443   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.518065   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.519776   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:43.523283   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:43.523293   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:46.099316   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:46.109738   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:46.109799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:46.135973   46141 cri.go:89] found id: ""
	I1202 19:25:46.135986   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.135993   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:46.135998   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:46.136053   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:46.160433   46141 cri.go:89] found id: ""
	I1202 19:25:46.160447   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.160454   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:46.160459   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:46.160562   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:46.185345   46141 cri.go:89] found id: ""
	I1202 19:25:46.185358   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.185365   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:46.185371   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:46.185431   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:46.209708   46141 cri.go:89] found id: ""
	I1202 19:25:46.209721   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.209728   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:46.209733   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:46.209799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:46.234274   46141 cri.go:89] found id: ""
	I1202 19:25:46.234288   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.234294   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:46.234299   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:46.234363   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:46.259257   46141 cri.go:89] found id: ""
	I1202 19:25:46.259271   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.259277   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:46.259282   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:46.259336   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:46.282587   46141 cri.go:89] found id: ""
	I1202 19:25:46.282601   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.282607   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:46.282620   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:46.282630   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:46.360010   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:46.351882   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.352560   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354236   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354883   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.356438   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:46.351882   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.352560   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354236   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354883   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.356438   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:46.360029   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:46.360040   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:46.435864   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:46.435883   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:46.464582   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:46.464597   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:46.531766   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:46.531784   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:49.042500   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:49.053773   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:49.053830   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:49.079262   46141 cri.go:89] found id: ""
	I1202 19:25:49.079276   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.079282   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:49.079288   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:49.079342   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:49.104725   46141 cri.go:89] found id: ""
	I1202 19:25:49.104738   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.104745   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:49.104759   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:49.104814   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:49.133788   46141 cri.go:89] found id: ""
	I1202 19:25:49.133801   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.133808   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:49.133824   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:49.133880   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:49.159349   46141 cri.go:89] found id: ""
	I1202 19:25:49.159371   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.159379   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:49.159384   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:49.159443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:49.197548   46141 cri.go:89] found id: ""
	I1202 19:25:49.197562   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.197569   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:49.197574   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:49.197641   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:49.223472   46141 cri.go:89] found id: ""
	I1202 19:25:49.223485   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.223492   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:49.223498   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:49.223558   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:49.247894   46141 cri.go:89] found id: ""
	I1202 19:25:49.247921   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.247929   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:49.247936   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:49.247949   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:49.331462   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:49.331482   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:49.370297   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:49.370316   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:49.439052   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:49.439071   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:49.449975   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:49.449991   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:49.513463   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:49.505741   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.506188   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.507718   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.508375   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.510100   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:49.505741   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.506188   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.507718   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.508375   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.510100   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:52.015209   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:52.026897   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:52.026956   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:52.053387   46141 cri.go:89] found id: ""
	I1202 19:25:52.053401   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.053408   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:52.053416   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:52.053475   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:52.079773   46141 cri.go:89] found id: ""
	I1202 19:25:52.079787   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.079793   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:52.079799   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:52.079854   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:52.107526   46141 cri.go:89] found id: ""
	I1202 19:25:52.107539   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.107546   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:52.107551   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:52.107610   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:52.134040   46141 cri.go:89] found id: ""
	I1202 19:25:52.134054   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.134061   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:52.134066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:52.134124   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:52.160401   46141 cri.go:89] found id: ""
	I1202 19:25:52.160421   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.160445   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:52.160450   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:52.160512   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:52.186015   46141 cri.go:89] found id: ""
	I1202 19:25:52.186029   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.186035   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:52.186041   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:52.186097   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:52.211315   46141 cri.go:89] found id: ""
	I1202 19:25:52.211328   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.211335   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:52.211342   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:52.211352   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:52.281330   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:52.281350   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:52.294618   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:52.294634   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:52.375867   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:52.366509   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.367120   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370063   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370490   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.371961   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:52.366509   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.367120   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370063   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370490   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.371961   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:52.375884   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:52.375895   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:52.454410   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:52.454433   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:54.985073   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:54.997287   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:54.997351   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:55.033193   46141 cri.go:89] found id: ""
	I1202 19:25:55.033207   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.033214   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:55.033220   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:55.033285   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:55.059947   46141 cri.go:89] found id: ""
	I1202 19:25:55.059961   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.059968   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:55.059973   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:55.060032   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:55.089719   46141 cri.go:89] found id: ""
	I1202 19:25:55.089731   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.089738   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:55.089744   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:55.089804   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:55.116791   46141 cri.go:89] found id: ""
	I1202 19:25:55.116805   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.116811   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:55.116816   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:55.116872   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:55.144575   46141 cri.go:89] found id: ""
	I1202 19:25:55.144589   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.144597   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:55.144602   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:55.144663   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:55.170532   46141 cri.go:89] found id: ""
	I1202 19:25:55.170546   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.170553   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:55.170558   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:55.170613   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:55.201295   46141 cri.go:89] found id: ""
	I1202 19:25:55.201309   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.201317   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:55.201324   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:55.201335   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:55.265951   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:55.265968   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:55.276457   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:55.276472   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:55.358449   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:55.350289   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.351045   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.352737   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.353291   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.354841   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:55.350289   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.351045   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.352737   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.353291   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.354841   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:55.358470   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:55.358481   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:55.438382   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:55.438401   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:57.969884   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:57.980234   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:57.980287   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:58.005151   46141 cri.go:89] found id: ""
	I1202 19:25:58.005165   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.005172   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:58.005177   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:58.005234   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:58.032254   46141 cri.go:89] found id: ""
	I1202 19:25:58.032267   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.032274   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:58.032279   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:58.032338   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:58.058556   46141 cri.go:89] found id: ""
	I1202 19:25:58.058570   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.058578   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:58.058583   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:58.058640   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:58.084123   46141 cri.go:89] found id: ""
	I1202 19:25:58.084136   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.084143   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:58.084148   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:58.084204   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:58.110792   46141 cri.go:89] found id: ""
	I1202 19:25:58.110806   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.110812   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:58.110820   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:58.110877   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:58.136499   46141 cri.go:89] found id: ""
	I1202 19:25:58.136512   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.136519   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:58.136524   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:58.136585   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:58.162083   46141 cri.go:89] found id: ""
	I1202 19:25:58.162096   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.162104   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:58.162111   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:58.162121   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:58.223736   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:58.216185   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.216966   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218585   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218890   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.220343   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:58.216185   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.216966   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218585   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218890   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.220343   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:58.223745   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:58.223756   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:58.308033   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:58.308051   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:58.341126   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:58.341141   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:58.407826   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:58.407843   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:00.920333   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:00.930302   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:00.930359   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:00.954390   46141 cri.go:89] found id: ""
	I1202 19:26:00.954404   46141 logs.go:282] 0 containers: []
	W1202 19:26:00.954411   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:00.954416   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:00.954483   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:00.980266   46141 cri.go:89] found id: ""
	I1202 19:26:00.980280   46141 logs.go:282] 0 containers: []
	W1202 19:26:00.980287   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:00.980292   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:00.980360   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:01.008460   46141 cri.go:89] found id: ""
	I1202 19:26:01.008482   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.008488   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:01.008493   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:01.008547   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:01.036672   46141 cri.go:89] found id: ""
	I1202 19:26:01.036686   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.036692   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:01.036698   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:01.036753   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:01.061548   46141 cri.go:89] found id: ""
	I1202 19:26:01.061562   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.061568   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:01.061573   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:01.061629   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:01.086617   46141 cri.go:89] found id: ""
	I1202 19:26:01.086631   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.086638   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:01.086643   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:01.086701   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:01.111676   46141 cri.go:89] found id: ""
	I1202 19:26:01.111690   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.111697   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:01.111704   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:01.111714   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:01.176991   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:01.177017   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:01.188305   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:01.188339   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:01.254955   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:01.246696   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.247518   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249195   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249523   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.251117   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:01.246696   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.247518   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249195   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249523   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.251117   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:01.254966   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:01.254977   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:01.336825   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:01.336852   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:03.866716   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:03.876694   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:03.876752   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:03.900150   46141 cri.go:89] found id: ""
	I1202 19:26:03.900164   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.900170   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:03.900176   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:03.900231   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:03.928045   46141 cri.go:89] found id: ""
	I1202 19:26:03.928059   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.928066   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:03.928071   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:03.928128   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:03.952359   46141 cri.go:89] found id: ""
	I1202 19:26:03.952372   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.952379   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:03.952384   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:03.952439   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:03.977113   46141 cri.go:89] found id: ""
	I1202 19:26:03.977127   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.977134   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:03.977139   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:03.977195   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:04.001871   46141 cri.go:89] found id: ""
	I1202 19:26:04.001884   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.001890   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:04.001896   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:04.001950   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:04.029122   46141 cri.go:89] found id: ""
	I1202 19:26:04.029136   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.029143   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:04.029148   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:04.029206   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:04.059191   46141 cri.go:89] found id: ""
	I1202 19:26:04.059205   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.059212   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:04.059219   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:04.059228   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:04.125149   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:04.125166   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:04.136144   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:04.136159   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:04.198077   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:04.190117   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.190591   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192194   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192641   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.194070   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:04.190117   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.190591   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192194   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192641   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.194070   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:04.198088   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:04.198098   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:04.273217   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:04.273235   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:06.807224   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:06.817250   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:06.817318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:06.845880   46141 cri.go:89] found id: ""
	I1202 19:26:06.845895   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.845902   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:06.845908   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:06.845963   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:06.870846   46141 cri.go:89] found id: ""
	I1202 19:26:06.870859   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.870866   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:06.870871   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:06.870927   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:06.896774   46141 cri.go:89] found id: ""
	I1202 19:26:06.896788   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.896794   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:06.896800   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:06.896857   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:06.924394   46141 cri.go:89] found id: ""
	I1202 19:26:06.924407   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.924414   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:06.924419   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:06.924477   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:06.951775   46141 cri.go:89] found id: ""
	I1202 19:26:06.951789   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.951796   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:06.951804   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:06.951865   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:06.976656   46141 cri.go:89] found id: ""
	I1202 19:26:06.976674   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.976682   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:06.976687   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:06.976743   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:07.002712   46141 cri.go:89] found id: ""
	I1202 19:26:07.002726   46141 logs.go:282] 0 containers: []
	W1202 19:26:07.002741   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:07.002753   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:07.002764   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:07.071978   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:07.063868   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.064510   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066274   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066863   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.068501   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:07.063868   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.064510   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066274   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066863   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.068501   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:07.071988   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:07.072001   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:07.148506   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:07.148525   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:07.177526   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:07.177542   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:07.244597   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:07.244614   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:09.755980   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:09.766062   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:09.766136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:09.791272   46141 cri.go:89] found id: ""
	I1202 19:26:09.791285   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.791292   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:09.791297   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:09.791352   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:09.819809   46141 cri.go:89] found id: ""
	I1202 19:26:09.819822   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.819829   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:09.819834   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:09.819890   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:09.845138   46141 cri.go:89] found id: ""
	I1202 19:26:09.845151   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.845158   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:09.845163   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:09.845233   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:09.869181   46141 cri.go:89] found id: ""
	I1202 19:26:09.869194   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.869201   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:09.869215   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:09.869269   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:09.894166   46141 cri.go:89] found id: ""
	I1202 19:26:09.894180   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.894187   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:09.894192   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:09.894246   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:09.918581   46141 cri.go:89] found id: ""
	I1202 19:26:09.918594   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.918601   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:09.918606   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:09.918670   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:09.943199   46141 cri.go:89] found id: ""
	I1202 19:26:09.943213   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.943219   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:09.943227   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:09.943238   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:10.008528   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:10.008545   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:10.019265   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:10.019283   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:10.097788   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:10.089795   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.090510   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092137   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092603   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.094152   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:10.089795   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.090510   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092137   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092603   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.094152   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:10.097798   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:10.097814   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:10.175343   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:10.175361   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:12.705105   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:12.714930   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:12.714992   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:12.738794   46141 cri.go:89] found id: ""
	I1202 19:26:12.738808   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.738814   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:12.738819   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:12.738893   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:12.763061   46141 cri.go:89] found id: ""
	I1202 19:26:12.763074   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.763088   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:12.763094   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:12.763147   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:12.789884   46141 cri.go:89] found id: ""
	I1202 19:26:12.789897   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.789904   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:12.789909   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:12.789967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:12.815897   46141 cri.go:89] found id: ""
	I1202 19:26:12.815911   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.815918   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:12.815923   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:12.815980   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:12.842434   46141 cri.go:89] found id: ""
	I1202 19:26:12.842448   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.842455   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:12.842461   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:12.842521   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:12.867046   46141 cri.go:89] found id: ""
	I1202 19:26:12.867059   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.867066   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:12.867071   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:12.867136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:12.891464   46141 cri.go:89] found id: ""
	I1202 19:26:12.891478   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.891484   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:12.891492   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:12.891503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:12.902121   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:12.902136   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:12.963892   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:12.955981   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.956766   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958385   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958708   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.960199   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:12.955981   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.956766   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958385   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958708   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.960199   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:12.963902   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:12.963913   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:13.043923   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:13.043944   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:13.073893   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:13.073909   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:15.646846   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:15.656672   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:15.656727   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:15.685223   46141 cri.go:89] found id: ""
	I1202 19:26:15.685236   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.685243   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:15.685249   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:15.685309   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:15.710499   46141 cri.go:89] found id: ""
	I1202 19:26:15.710513   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.710520   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:15.710526   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:15.710582   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:15.734748   46141 cri.go:89] found id: ""
	I1202 19:26:15.734762   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.734775   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:15.734780   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:15.734833   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:15.759539   46141 cri.go:89] found id: ""
	I1202 19:26:15.759551   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.759558   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:15.759564   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:15.759617   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:15.788358   46141 cri.go:89] found id: ""
	I1202 19:26:15.788371   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.788378   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:15.788383   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:15.788443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:15.813365   46141 cri.go:89] found id: ""
	I1202 19:26:15.813379   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.813386   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:15.813391   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:15.813445   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:15.842535   46141 cri.go:89] found id: ""
	I1202 19:26:15.842550   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.842558   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:15.842565   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:15.842576   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:15.853891   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:15.853906   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:15.921614   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:15.914053   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.914564   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916003   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916376   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.917614   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:15.914053   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.914564   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916003   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916376   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.917614   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:15.921632   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:15.921643   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:15.997309   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:15.997326   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:16.029023   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:16.029039   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:18.596080   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:18.605748   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:18.605804   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:18.630525   46141 cri.go:89] found id: ""
	I1202 19:26:18.630539   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.630546   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:18.630551   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:18.630608   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:18.655399   46141 cri.go:89] found id: ""
	I1202 19:26:18.655412   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.655419   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:18.655425   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:18.655479   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:18.681041   46141 cri.go:89] found id: ""
	I1202 19:26:18.681054   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.681061   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:18.681067   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:18.681123   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:18.710155   46141 cri.go:89] found id: ""
	I1202 19:26:18.710168   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.710181   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:18.710187   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:18.710241   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:18.735242   46141 cri.go:89] found id: ""
	I1202 19:26:18.735256   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.735263   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:18.735268   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:18.735327   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:18.761061   46141 cri.go:89] found id: ""
	I1202 19:26:18.761074   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.761081   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:18.761087   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:18.761149   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:18.788428   46141 cri.go:89] found id: ""
	I1202 19:26:18.788441   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.788448   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:18.788456   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:18.788475   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:18.822471   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:18.822487   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:18.888827   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:18.888844   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:18.899937   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:18.899952   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:18.968344   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:18.961155   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.961520   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963096   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963416   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.964883   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:18.961155   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.961520   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963096   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963416   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.964883   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:18.968353   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:18.968365   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:21.544554   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:21.555728   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:21.555784   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:21.584623   46141 cri.go:89] found id: ""
	I1202 19:26:21.584639   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.584646   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:21.584650   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:21.584710   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:21.614647   46141 cri.go:89] found id: ""
	I1202 19:26:21.614660   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.614668   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:21.614672   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:21.614731   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:21.642925   46141 cri.go:89] found id: ""
	I1202 19:26:21.642938   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.642945   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:21.642950   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:21.643003   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:21.668180   46141 cri.go:89] found id: ""
	I1202 19:26:21.668194   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.668202   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:21.668207   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:21.668263   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:21.693295   46141 cri.go:89] found id: ""
	I1202 19:26:21.693308   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.693315   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:21.693321   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:21.693375   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:21.720442   46141 cri.go:89] found id: ""
	I1202 19:26:21.720456   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.720463   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:21.720477   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:21.720550   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:21.745858   46141 cri.go:89] found id: ""
	I1202 19:26:21.745872   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.745879   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:21.745887   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:21.745898   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:21.821815   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:21.821832   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:21.852228   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:21.852243   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:21.925590   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:21.925615   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:21.936630   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:21.936646   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:22.000893   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:21.992158   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.992882   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.994656   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.995179   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.996825   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:21.992158   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.992882   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.994656   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.995179   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.996825   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:24.501139   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:24.511236   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:24.511298   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:24.536070   46141 cri.go:89] found id: ""
	I1202 19:26:24.536084   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.536091   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:24.536096   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:24.536152   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:24.570105   46141 cri.go:89] found id: ""
	I1202 19:26:24.570118   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.570125   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:24.570131   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:24.570195   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:24.602200   46141 cri.go:89] found id: ""
	I1202 19:26:24.602213   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.602220   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:24.602225   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:24.602286   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:24.627716   46141 cri.go:89] found id: ""
	I1202 19:26:24.627730   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.627737   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:24.627743   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:24.627799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:24.653555   46141 cri.go:89] found id: ""
	I1202 19:26:24.653568   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.653575   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:24.653580   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:24.653638   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:24.681296   46141 cri.go:89] found id: ""
	I1202 19:26:24.681310   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.681316   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:24.681322   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:24.681376   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:24.707692   46141 cri.go:89] found id: ""
	I1202 19:26:24.707705   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.707714   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:24.707721   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:24.707731   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:24.782015   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:24.782033   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:24.809710   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:24.809725   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:24.880042   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:24.880061   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:24.890565   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:24.890580   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:24.952416   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:24.944479   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.945161   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.946873   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.947505   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.949103   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:24.944479   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.945161   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.946873   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.947505   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.949103   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:27.452632   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:27.462873   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:27.462933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:27.487753   46141 cri.go:89] found id: ""
	I1202 19:26:27.487766   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.487773   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:27.487778   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:27.487835   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:27.512748   46141 cri.go:89] found id: ""
	I1202 19:26:27.512762   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.512771   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:27.512776   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:27.512833   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:27.542024   46141 cri.go:89] found id: ""
	I1202 19:26:27.542038   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.542045   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:27.542051   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:27.542109   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:27.579960   46141 cri.go:89] found id: ""
	I1202 19:26:27.579973   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.579979   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:27.579989   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:27.580045   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:27.608229   46141 cri.go:89] found id: ""
	I1202 19:26:27.608242   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.608250   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:27.608255   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:27.608318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:27.634613   46141 cri.go:89] found id: ""
	I1202 19:26:27.634626   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.634633   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:27.634639   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:27.634695   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:27.659548   46141 cri.go:89] found id: ""
	I1202 19:26:27.659562   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.659569   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:27.659576   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:27.659587   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:27.727694   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:27.720173   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.720588   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722165   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722762   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.724256   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:27.720173   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.720588   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722165   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722762   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.724256   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:27.727704   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:27.727715   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:27.802309   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:27.802327   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:27.831471   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:27.831486   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:27.899227   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:27.899244   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:30.413752   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:30.423684   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:30.423741   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:30.447673   46141 cri.go:89] found id: ""
	I1202 19:26:30.447688   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.447695   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:30.447706   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:30.447762   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:30.473178   46141 cri.go:89] found id: ""
	I1202 19:26:30.473191   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.473198   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:30.473203   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:30.473258   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:30.499098   46141 cri.go:89] found id: ""
	I1202 19:26:30.499112   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.499119   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:30.499124   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:30.499181   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:30.528083   46141 cri.go:89] found id: ""
	I1202 19:26:30.528096   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.528103   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:30.528108   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:30.528165   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:30.562772   46141 cri.go:89] found id: ""
	I1202 19:26:30.562784   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.562791   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:30.562796   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:30.562852   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:30.592139   46141 cri.go:89] found id: ""
	I1202 19:26:30.592152   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.592158   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:30.592163   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:30.592217   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:30.624862   46141 cri.go:89] found id: ""
	I1202 19:26:30.624875   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.624882   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:30.624889   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:30.624901   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:30.636356   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:30.636374   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:30.698721   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:30.690521   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.691312   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.692970   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.693279   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.694784   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:30.690521   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.691312   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.692970   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.693279   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.694784   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:30.698731   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:30.698745   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:30.775221   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:30.775240   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:30.812702   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:30.812718   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:33.383460   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:33.393252   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:33.393318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:33.417381   46141 cri.go:89] found id: ""
	I1202 19:26:33.417394   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.417401   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:33.417407   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:33.417467   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:33.441554   46141 cri.go:89] found id: ""
	I1202 19:26:33.441567   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.441574   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:33.441580   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:33.441633   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:33.466601   46141 cri.go:89] found id: ""
	I1202 19:26:33.466615   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.466621   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:33.466627   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:33.466680   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:33.494897   46141 cri.go:89] found id: ""
	I1202 19:26:33.494910   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.494917   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:33.494922   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:33.494978   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:33.519464   46141 cri.go:89] found id: ""
	I1202 19:26:33.519478   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.519485   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:33.519490   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:33.519549   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:33.556189   46141 cri.go:89] found id: ""
	I1202 19:26:33.556203   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.556210   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:33.556216   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:33.556276   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:33.592420   46141 cri.go:89] found id: ""
	I1202 19:26:33.592436   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.592442   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:33.592459   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:33.592469   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:33.669109   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:33.669128   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:33.703954   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:33.703970   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:33.773221   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:33.773240   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:33.784054   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:33.784068   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:33.846758   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:33.838322   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.839078   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.840804   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.841128   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.842739   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:33.838322   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.839078   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.840804   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.841128   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.842739   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:36.347013   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:36.357404   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:36.357461   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:36.383307   46141 cri.go:89] found id: ""
	I1202 19:26:36.383322   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.383330   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:36.383336   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:36.383391   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:36.409566   46141 cri.go:89] found id: ""
	I1202 19:26:36.409580   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.409588   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:36.409593   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:36.409682   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:36.435280   46141 cri.go:89] found id: ""
	I1202 19:26:36.435294   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.435300   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:36.435306   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:36.435366   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:36.460290   46141 cri.go:89] found id: ""
	I1202 19:26:36.460304   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.460310   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:36.460316   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:36.460368   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:36.484719   46141 cri.go:89] found id: ""
	I1202 19:26:36.484733   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.484740   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:36.484746   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:36.484800   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:36.510020   46141 cri.go:89] found id: ""
	I1202 19:26:36.510034   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.510042   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:36.510048   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:36.510106   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:36.536500   46141 cri.go:89] found id: ""
	I1202 19:26:36.536515   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.536521   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:36.536529   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:36.536539   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:36.616617   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:36.616636   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:36.647169   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:36.647185   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:36.711768   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:36.711787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:36.723184   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:36.723200   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:36.795174   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:36.786043   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.786834   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.788445   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.789117   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.791007   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:36.786043   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.786834   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.788445   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.789117   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.791007   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:39.296074   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:39.306024   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:39.306085   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:39.335889   46141 cri.go:89] found id: ""
	I1202 19:26:39.335915   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.335923   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:39.335928   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:39.335990   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:39.361424   46141 cri.go:89] found id: ""
	I1202 19:26:39.361438   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.361445   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:39.361450   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:39.361505   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:39.387900   46141 cri.go:89] found id: ""
	I1202 19:26:39.387913   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.387920   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:39.387925   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:39.387988   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:39.413856   46141 cri.go:89] found id: ""
	I1202 19:26:39.413871   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.413878   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:39.413884   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:39.413938   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:39.439194   46141 cri.go:89] found id: ""
	I1202 19:26:39.439208   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.439215   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:39.439221   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:39.439278   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:39.465337   46141 cri.go:89] found id: ""
	I1202 19:26:39.465351   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.465359   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:39.465375   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:39.465442   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:39.493124   46141 cri.go:89] found id: ""
	I1202 19:26:39.493137   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.493144   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:39.493152   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:39.493162   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:39.573759   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:39.573780   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:39.608655   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:39.608671   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:39.681483   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:39.681503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:39.692678   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:39.692693   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:39.753005   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:39.745469   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.746166   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747307   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747932   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.749551   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:39.745469   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.746166   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747307   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747932   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.749551   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:42.253264   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:42.266584   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:42.266662   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:42.301576   46141 cri.go:89] found id: ""
	I1202 19:26:42.301591   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.301599   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:42.301605   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:42.301727   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:42.360247   46141 cri.go:89] found id: ""
	I1202 19:26:42.360262   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.360269   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:42.360275   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:42.360344   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:42.390741   46141 cri.go:89] found id: ""
	I1202 19:26:42.390756   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.390766   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:42.390776   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:42.390853   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:42.419121   46141 cri.go:89] found id: ""
	I1202 19:26:42.419137   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.419144   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:42.419152   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:42.419225   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:42.446778   46141 cri.go:89] found id: ""
	I1202 19:26:42.446792   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.446811   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:42.446816   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:42.446884   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:42.472520   46141 cri.go:89] found id: ""
	I1202 19:26:42.472534   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.472541   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:42.472546   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:42.472603   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:42.498770   46141 cri.go:89] found id: ""
	I1202 19:26:42.498783   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.498789   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:42.498797   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:42.498806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:42.579006   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:42.579025   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:42.609942   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:42.609958   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:42.683995   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:42.684022   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:42.695018   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:42.695038   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:42.757205   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:42.749273   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.750084   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751649   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751951   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.753404   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:42.749273   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.750084   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751649   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751951   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.753404   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:45.257372   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:45.279258   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:45.279391   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:45.324360   46141 cri.go:89] found id: ""
	I1202 19:26:45.324374   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.324382   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:45.324389   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:45.324461   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:45.357406   46141 cri.go:89] found id: ""
	I1202 19:26:45.357438   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.357445   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:45.357451   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:45.357520   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:45.390814   46141 cri.go:89] found id: ""
	I1202 19:26:45.390829   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.390836   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:45.390842   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:45.390910   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:45.422248   46141 cri.go:89] found id: ""
	I1202 19:26:45.422262   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.422269   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:45.422274   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:45.422331   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:45.447593   46141 cri.go:89] found id: ""
	I1202 19:26:45.447607   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.447614   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:45.447618   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:45.447669   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:45.473750   46141 cri.go:89] found id: ""
	I1202 19:26:45.473763   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.473770   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:45.473775   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:45.473838   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:45.502345   46141 cri.go:89] found id: ""
	I1202 19:26:45.502358   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.502364   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:45.502373   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:45.502383   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:45.569300   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:45.569319   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:45.581070   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:45.581086   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:45.647631   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:45.640250   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.640775   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642295   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642715   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.644225   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:45.640250   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.640775   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642295   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642715   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.644225   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:45.647641   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:45.647652   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:45.722681   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:45.722699   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:48.249966   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:48.259729   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:48.259788   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:48.284968   46141 cri.go:89] found id: ""
	I1202 19:26:48.284981   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.284995   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:48.285001   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:48.285058   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:48.312117   46141 cri.go:89] found id: ""
	I1202 19:26:48.312131   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.312138   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:48.312143   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:48.312196   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:48.338030   46141 cri.go:89] found id: ""
	I1202 19:26:48.338044   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.338050   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:48.338055   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:48.338108   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:48.363655   46141 cri.go:89] found id: ""
	I1202 19:26:48.363668   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.363675   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:48.363680   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:48.363732   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:48.388544   46141 cri.go:89] found id: ""
	I1202 19:26:48.388565   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.388572   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:48.388577   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:48.388631   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:48.413919   46141 cri.go:89] found id: ""
	I1202 19:26:48.413932   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.413939   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:48.413962   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:48.414018   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:48.438768   46141 cri.go:89] found id: ""
	I1202 19:26:48.438782   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.438789   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:48.438796   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:48.438806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:48.508480   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:48.508498   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:48.519336   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:48.519354   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:48.612485   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:48.603737   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.604095   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605597   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605931   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.607339   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:48.603737   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.604095   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605597   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605931   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.607339   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:48.612495   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:48.612505   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:48.689541   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:48.689559   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:51.220741   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:51.230995   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:51.231052   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:51.257767   46141 cri.go:89] found id: ""
	I1202 19:26:51.257786   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.257794   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:51.257801   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:51.257856   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:51.282338   46141 cri.go:89] found id: ""
	I1202 19:26:51.282351   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.282358   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:51.282363   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:51.282425   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:51.311031   46141 cri.go:89] found id: ""
	I1202 19:26:51.311044   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.311051   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:51.311056   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:51.311111   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:51.339385   46141 cri.go:89] found id: ""
	I1202 19:26:51.339399   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.339405   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:51.339410   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:51.339476   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:51.368365   46141 cri.go:89] found id: ""
	I1202 19:26:51.368379   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.368386   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:51.368391   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:51.368455   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:51.393598   46141 cri.go:89] found id: ""
	I1202 19:26:51.393611   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.393618   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:51.393623   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:51.393696   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:51.423516   46141 cri.go:89] found id: ""
	I1202 19:26:51.423529   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.423536   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:51.423543   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:51.423553   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:51.488010   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:51.480076   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.480890   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482553   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482887   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.484436   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:51.480076   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.480890   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482553   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482887   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.484436   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:51.488020   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:51.488031   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:51.568503   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:51.568521   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:51.604611   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:51.604626   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:51.673166   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:51.673184   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:54.184676   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:54.194875   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:54.194933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:54.219830   46141 cri.go:89] found id: ""
	I1202 19:26:54.219850   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.219857   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:54.219863   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:54.219922   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:54.245201   46141 cri.go:89] found id: ""
	I1202 19:26:54.245214   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.245221   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:54.245228   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:54.245295   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:54.270718   46141 cri.go:89] found id: ""
	I1202 19:26:54.270732   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.270739   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:54.270744   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:54.270799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:54.295488   46141 cri.go:89] found id: ""
	I1202 19:26:54.295501   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.295508   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:54.295513   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:54.295568   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:54.320597   46141 cri.go:89] found id: ""
	I1202 19:26:54.320610   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.320617   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:54.320622   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:54.320675   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:54.348002   46141 cri.go:89] found id: ""
	I1202 19:26:54.348017   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.348024   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:54.348029   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:54.348089   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:54.374189   46141 cri.go:89] found id: ""
	I1202 19:26:54.374203   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.374209   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:54.374217   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:54.374229   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:54.439569   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:54.429536   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.430781   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.433917   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.434361   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.435887   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:54.429536   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.430781   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.433917   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.434361   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.435887   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:54.439581   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:54.439594   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:54.524214   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:54.524233   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:54.564820   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:54.564841   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:54.639908   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:54.639928   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:57.151760   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:57.161952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:57.162007   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:57.186061   46141 cri.go:89] found id: ""
	I1202 19:26:57.186074   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.186081   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:57.186087   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:57.186144   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:57.211829   46141 cri.go:89] found id: ""
	I1202 19:26:57.211843   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.211850   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:57.211856   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:57.211914   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:57.237584   46141 cri.go:89] found id: ""
	I1202 19:26:57.237598   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.237605   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:57.237610   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:57.237697   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:57.266726   46141 cri.go:89] found id: ""
	I1202 19:26:57.266740   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.266746   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:57.266752   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:57.266810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:57.293971   46141 cri.go:89] found id: ""
	I1202 19:26:57.293984   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.293991   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:57.293996   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:57.294050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:57.322602   46141 cri.go:89] found id: ""
	I1202 19:26:57.322615   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.322622   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:57.322628   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:57.322685   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:57.347221   46141 cri.go:89] found id: ""
	I1202 19:26:57.347234   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.347249   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:57.347257   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:57.347267   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:57.358475   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:57.358490   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:57.420357   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:57.412236   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.412944   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.414687   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.415347   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.416972   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:57.412236   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.412944   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.414687   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.415347   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.416972   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:57.420367   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:57.420378   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:57.498037   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:57.498057   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:57.530853   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:57.530870   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:00.105404   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:00.167692   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:00.167773   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:00.310630   46141 cri.go:89] found id: ""
	I1202 19:27:00.310644   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.310652   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:00.310659   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:00.310726   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:00.379652   46141 cri.go:89] found id: ""
	I1202 19:27:00.379665   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.379673   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:00.379678   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:00.379740   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:00.417470   46141 cri.go:89] found id: ""
	I1202 19:27:00.417487   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.417496   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:00.417501   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:00.417571   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:00.459129   46141 cri.go:89] found id: ""
	I1202 19:27:00.459144   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.459151   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:00.459157   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:00.459225   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:00.491958   46141 cri.go:89] found id: ""
	I1202 19:27:00.491973   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.491980   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:00.491986   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:00.492050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:00.522076   46141 cri.go:89] found id: ""
	I1202 19:27:00.522091   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.522098   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:00.522110   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:00.522185   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:00.560640   46141 cri.go:89] found id: ""
	I1202 19:27:00.560654   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.560661   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:00.560668   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:00.560677   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:00.652444   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:00.652464   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:00.684426   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:00.684441   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:00.751419   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:00.751437   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:00.763771   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:00.763786   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:00.826022   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:00.817872   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.818580   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820137   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820449   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.822046   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:00.817872   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.818580   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820137   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820449   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.822046   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:03.326866   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:03.336590   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:03.336644   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:03.361031   46141 cri.go:89] found id: ""
	I1202 19:27:03.361045   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.361051   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:03.361057   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:03.361109   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:03.385187   46141 cri.go:89] found id: ""
	I1202 19:27:03.385201   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.385208   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:03.385214   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:03.385268   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:03.410330   46141 cri.go:89] found id: ""
	I1202 19:27:03.410343   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.410350   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:03.410355   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:03.410412   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:03.435485   46141 cri.go:89] found id: ""
	I1202 19:27:03.435499   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.435505   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:03.435511   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:03.435565   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:03.460310   46141 cri.go:89] found id: ""
	I1202 19:27:03.460323   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.460330   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:03.460335   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:03.460389   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:03.488041   46141 cri.go:89] found id: ""
	I1202 19:27:03.488054   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.488061   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:03.488066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:03.488120   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:03.512748   46141 cri.go:89] found id: ""
	I1202 19:27:03.512761   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.512768   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:03.512776   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:03.512787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:03.523642   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:03.523658   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:03.617573   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:03.607856   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.608861   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610107   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610726   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.612297   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:03.607856   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.608861   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610107   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610726   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.612297   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:03.617591   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:03.617602   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:03.694365   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:03.694383   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:03.726522   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:03.726537   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:06.302579   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:06.312543   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:06.312604   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:06.337638   46141 cri.go:89] found id: ""
	I1202 19:27:06.337693   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.337700   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:06.337706   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:06.337764   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:06.362621   46141 cri.go:89] found id: ""
	I1202 19:27:06.362634   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.362641   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:06.362646   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:06.362698   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:06.387105   46141 cri.go:89] found id: ""
	I1202 19:27:06.387121   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.387127   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:06.387133   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:06.387186   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:06.415681   46141 cri.go:89] found id: ""
	I1202 19:27:06.415694   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.415700   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:06.415706   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:06.415760   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:06.444254   46141 cri.go:89] found id: ""
	I1202 19:27:06.444267   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.444274   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:06.444279   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:06.444337   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:06.468778   46141 cri.go:89] found id: ""
	I1202 19:27:06.468791   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.468799   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:06.468805   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:06.468859   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:06.493545   46141 cri.go:89] found id: ""
	I1202 19:27:06.493558   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.493564   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:06.493572   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:06.493583   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:06.567943   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:06.559330   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.560393   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.561970   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.562264   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.563594   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:06.559330   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.560393   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.561970   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.562264   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.563594   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:06.567953   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:06.567963   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:06.656325   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:06.656344   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:06.685907   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:06.685923   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:06.756875   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:06.756894   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:09.270257   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:09.280597   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:09.280658   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:09.304838   46141 cri.go:89] found id: ""
	I1202 19:27:09.304856   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.304863   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:09.304872   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:09.304926   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:09.329409   46141 cri.go:89] found id: ""
	I1202 19:27:09.329422   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.329430   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:09.329435   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:09.329491   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:09.353934   46141 cri.go:89] found id: ""
	I1202 19:27:09.353948   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.353954   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:09.353960   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:09.354016   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:09.379084   46141 cri.go:89] found id: ""
	I1202 19:27:09.379098   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.379105   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:09.379111   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:09.379166   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:09.404377   46141 cri.go:89] found id: ""
	I1202 19:27:09.404391   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.404398   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:09.404403   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:09.404459   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:09.429248   46141 cri.go:89] found id: ""
	I1202 19:27:09.429262   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.429269   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:09.429274   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:09.429331   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:09.453340   46141 cri.go:89] found id: ""
	I1202 19:27:09.453354   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.453360   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:09.453367   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:09.453378   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:09.519114   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:09.519131   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:09.530268   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:09.530282   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:09.622354   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:09.612897   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.613708   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615186   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615757   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.617773   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:09.612897   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.613708   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615186   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615757   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.617773   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:09.622364   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:09.622374   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:09.698919   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:09.698936   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
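
	The repeating blocks above show the minikube library (ssh_runner.go, cri.go, logs.go) polling roughly every three seconds for a healthy control plane: it looks for a kube-apiserver process with pgrep, lists CRI containers for each expected component with crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before trying again. A minimal, hypothetical Go sketch of such a wait-and-gather loop follows; the helper names, commands, overall timeout, and three-second interval are illustrative assumptions, not minikube's actual implementation.

	// poll_apiserver_sketch.go
	//
	// Hypothetical sketch of a wait loop like the one visible in the log above:
	// look for a running kube-apiserver, and if none is found, collect diagnostic
	// logs before retrying. Not minikube's real code; names and timings are
	// illustrative assumptions.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverRunning reports whether a kube-apiserver process is visible,
	// mirroring the "sudo pgrep -xnf kube-apiserver.*minikube.*" calls in the log.
	func apiserverRunning() bool {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		return err == nil && strings.TrimSpace(string(out)) != ""
	}

	// gatherLogs collects the same kinds of diagnostics the log shows: kubelet
	// journal, dmesg, CRI-O journal, and container status.
	func gatherLogs() {
		cmds := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo crictl ps -a || sudo docker ps -a",
		}
		for name, cmd := range cmds {
			out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("=== %s ===\n%s\n", name, out)
		}
	}

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			gatherLogs()
			time.Sleep(3 * time.Second) // roughly the cadence visible in the log timestamps
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
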
	I1202 19:27:12.231072   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:12.240732   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:12.240796   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:12.267547   46141 cri.go:89] found id: ""
	I1202 19:27:12.267560   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.267566   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:12.267572   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:12.267626   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:12.291129   46141 cri.go:89] found id: ""
	I1202 19:27:12.291143   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.291150   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:12.291155   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:12.291209   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:12.316228   46141 cri.go:89] found id: ""
	I1202 19:27:12.316242   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.316248   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:12.316253   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:12.316305   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:12.340306   46141 cri.go:89] found id: ""
	I1202 19:27:12.340319   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.340326   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:12.340331   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:12.340386   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:12.365210   46141 cri.go:89] found id: ""
	I1202 19:27:12.365224   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.365230   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:12.365239   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:12.365299   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:12.393299   46141 cri.go:89] found id: ""
	I1202 19:27:12.393312   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.393319   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:12.393327   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:12.393387   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:12.418063   46141 cri.go:89] found id: ""
	I1202 19:27:12.418089   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.418096   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:12.418104   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:12.418114   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:12.450419   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:12.450434   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:12.520281   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:12.520300   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:12.531244   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:12.531260   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:12.614672   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:12.606974   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.607492   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609182   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609779   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.611155   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:12.606974   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.607492   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609182   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609779   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.611155   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:12.614681   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:12.614691   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:15.191935   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:15.202075   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:15.202136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:15.227991   46141 cri.go:89] found id: ""
	I1202 19:27:15.228004   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.228011   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:15.228016   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:15.228073   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:15.253837   46141 cri.go:89] found id: ""
	I1202 19:27:15.253850   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.253856   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:15.253861   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:15.253916   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:15.279658   46141 cri.go:89] found id: ""
	I1202 19:27:15.279671   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.279677   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:15.279682   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:15.279735   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:15.303415   46141 cri.go:89] found id: ""
	I1202 19:27:15.303429   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.303435   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:15.303440   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:15.303496   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:15.327738   46141 cri.go:89] found id: ""
	I1202 19:27:15.327752   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.327759   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:15.327764   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:15.327818   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:15.353097   46141 cri.go:89] found id: ""
	I1202 19:27:15.353110   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.353117   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:15.353122   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:15.353175   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:15.377713   46141 cri.go:89] found id: ""
	I1202 19:27:15.377726   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.377734   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:15.377741   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:15.377751   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:15.443006   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:15.443024   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:15.453500   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:15.453519   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:15.518415   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:15.510822   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.511600   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513071   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513530   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.514994   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:15.510822   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.511600   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513071   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513530   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.514994   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:15.518425   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:15.518438   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:15.596810   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:15.596828   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:18.130179   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:18.140204   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:18.140265   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:18.167800   46141 cri.go:89] found id: ""
	I1202 19:27:18.167814   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.167821   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:18.167826   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:18.167882   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:18.191990   46141 cri.go:89] found id: ""
	I1202 19:27:18.192003   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.192010   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:18.192015   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:18.192072   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:18.216815   46141 cri.go:89] found id: ""
	I1202 19:27:18.216828   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.216835   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:18.216840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:18.216894   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:18.240868   46141 cri.go:89] found id: ""
	I1202 19:27:18.240881   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.240888   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:18.240894   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:18.240950   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:18.265457   46141 cri.go:89] found id: ""
	I1202 19:27:18.265470   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.265476   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:18.265482   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:18.265533   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:18.289248   46141 cri.go:89] found id: ""
	I1202 19:27:18.289262   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.289269   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:18.289275   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:18.289339   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:18.312672   46141 cri.go:89] found id: ""
	I1202 19:27:18.312685   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.312692   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:18.312700   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:18.312710   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:18.380764   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:18.380781   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:18.391485   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:18.391501   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:18.453699   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:18.445756   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.446556   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448225   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448811   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.450496   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:18.445756   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.446556   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448225   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448811   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.450496   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:18.453709   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:18.453720   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:18.530116   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:18.530134   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:21.069567   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:21.079484   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:21.079550   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:21.103488   46141 cri.go:89] found id: ""
	I1202 19:27:21.103503   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.103511   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:21.103517   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:21.103572   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:21.130794   46141 cri.go:89] found id: ""
	I1202 19:27:21.130807   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.130814   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:21.130819   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:21.130876   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:21.154925   46141 cri.go:89] found id: ""
	I1202 19:27:21.154940   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.154946   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:21.154952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:21.155008   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:21.183874   46141 cri.go:89] found id: ""
	I1202 19:27:21.183887   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.183895   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:21.183900   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:21.183956   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:21.208723   46141 cri.go:89] found id: ""
	I1202 19:27:21.208736   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.208744   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:21.208750   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:21.208805   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:21.233965   46141 cri.go:89] found id: ""
	I1202 19:27:21.233978   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.233985   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:21.233990   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:21.234046   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:21.257686   46141 cri.go:89] found id: ""
	I1202 19:27:21.257699   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.257706   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:21.257714   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:21.257724   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:21.318236   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:21.310432   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.311107   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.312717   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.313282   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.314957   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:21.310432   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.311107   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.312717   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.313282   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.314957   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:21.318250   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:21.318261   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:21.395292   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:21.395310   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:21.422658   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:21.422674   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:21.489157   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:21.489174   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:24.001769   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:24.011691   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:24.011752   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:24.042533   46141 cri.go:89] found id: ""
	I1202 19:27:24.042554   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.042561   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:24.042566   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:24.042624   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:24.070666   46141 cri.go:89] found id: ""
	I1202 19:27:24.070679   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.070686   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:24.070691   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:24.070753   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:24.095535   46141 cri.go:89] found id: ""
	I1202 19:27:24.095549   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.095556   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:24.095561   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:24.095619   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:24.123758   46141 cri.go:89] found id: ""
	I1202 19:27:24.123772   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.123779   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:24.123784   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:24.123838   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:24.149095   46141 cri.go:89] found id: ""
	I1202 19:27:24.149108   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.149114   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:24.149120   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:24.149175   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:24.174002   46141 cri.go:89] found id: ""
	I1202 19:27:24.174015   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.174022   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:24.174027   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:24.174125   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:24.200105   46141 cri.go:89] found id: ""
	I1202 19:27:24.200119   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.200126   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:24.200133   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:24.200144   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:24.266202   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:24.266219   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:24.277238   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:24.277253   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:24.343395   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:24.336040   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.336700   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338307   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338687   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.340133   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:24.336040   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.336700   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338307   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338687   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.340133   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:24.343404   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:24.343414   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:24.424919   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:24.424936   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
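
	Every describe-nodes attempt in these cycles fails the same way: kubectl cannot reach https://localhost:8441, and the dial on [::1]:8441 is refused, which simply means nothing is listening on the apiserver port inside the node yet. A quick, hypothetical Go probe of that condition is sketched below; the addresses and timeout are assumptions taken from the error text, not part of minikube or the test suite.

	// port_probe_sketch.go
	//
	// Hypothetical probe for the condition behind the "connection refused"
	// errors above: is anything accepting TCP connections on apiserver port 8441?
	// Purely illustrative; not part of minikube or the test suite.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"127.0.0.1:8441", "[::1]:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				// This is the state the log shows: dial tcp [::1]:8441: connection refused.
				fmt.Printf("%s: not reachable (%v)\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: listening\n", addr)
		}
	}
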
	I1202 19:27:26.953925   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:26.963713   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:26.963769   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:26.988142   46141 cri.go:89] found id: ""
	I1202 19:27:26.988156   46141 logs.go:282] 0 containers: []
	W1202 19:27:26.988163   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:26.988168   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:26.988223   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:27.013673   46141 cri.go:89] found id: ""
	I1202 19:27:27.013687   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.013694   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:27.013699   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:27.013754   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:27.039371   46141 cri.go:89] found id: ""
	I1202 19:27:27.039384   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.039391   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:27.039396   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:27.039452   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:27.062786   46141 cri.go:89] found id: ""
	I1202 19:27:27.062800   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.062807   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:27.062812   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:27.062868   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:27.087058   46141 cri.go:89] found id: ""
	I1202 19:27:27.087072   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.087078   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:27.087083   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:27.087139   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:27.111397   46141 cri.go:89] found id: ""
	I1202 19:27:27.111410   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.111417   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:27.111422   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:27.111474   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:27.134753   46141 cri.go:89] found id: ""
	I1202 19:27:27.134774   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.134781   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:27.134788   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:27.134798   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:27.200051   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:27.200069   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:27.210589   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:27.210603   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:27.274673   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:27.267276   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.268043   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269599   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269964   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.271462   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:27.267276   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.268043   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269599   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269964   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.271462   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:27.274684   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:27.274695   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:27.350589   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:27.350607   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:29.879009   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:29.888757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:29.888814   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:29.914106   46141 cri.go:89] found id: ""
	I1202 19:27:29.914119   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.914126   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:29.914131   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:29.914198   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:29.945870   46141 cri.go:89] found id: ""
	I1202 19:27:29.945883   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.945890   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:29.945895   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:29.945951   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:29.972147   46141 cri.go:89] found id: ""
	I1202 19:27:29.972161   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.972168   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:29.972173   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:29.972237   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:29.999569   46141 cri.go:89] found id: ""
	I1202 19:27:29.999583   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.999590   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:29.999595   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:29.999654   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:30.048258   46141 cri.go:89] found id: ""
	I1202 19:27:30.048273   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.048281   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:30.048286   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:30.048361   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:30.083224   46141 cri.go:89] found id: ""
	I1202 19:27:30.083238   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.083245   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:30.083251   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:30.083308   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:30.113945   46141 cri.go:89] found id: ""
	I1202 19:27:30.113959   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.113966   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:30.113975   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:30.113986   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:30.192106   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:30.192125   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:30.221887   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:30.221904   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:30.290188   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:30.290204   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:30.301167   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:30.301182   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:30.362881   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:30.354371   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.354912   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.356661   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.357283   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.358977   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:30.354371   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.354912   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.356661   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.357283   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.358977   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
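
	The paired 'found id: ""' / "0 containers" lines in each cycle come from listing CRI containers per component and getting empty output back. A small, hypothetical sketch of that check, shelling out to crictl the same way the logged commands do, follows; the function name and error handling are assumptions, not the cri.go implementation.

	// cri_list_sketch.go
	//
	// Hypothetical version of the per-component container check seen in the log:
	// run "crictl ps -a --quiet --name=<component>" and treat empty output as
	// "no container found". Illustrative only; not minikube's cri.go.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the container IDs crictl reports for a given name filter.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", c, len(ids))
		}
	}
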
	I1202 19:27:32.863109   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:32.872876   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:32.872937   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:32.897586   46141 cri.go:89] found id: ""
	I1202 19:27:32.897603   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.897610   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:32.897615   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:32.897706   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:32.924245   46141 cri.go:89] found id: ""
	I1202 19:27:32.924258   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.924265   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:32.924270   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:32.924332   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:32.951911   46141 cri.go:89] found id: ""
	I1202 19:27:32.951925   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.951932   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:32.951938   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:32.951992   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:32.975852   46141 cri.go:89] found id: ""
	I1202 19:27:32.975865   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.975872   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:32.975878   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:32.975933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:33.000511   46141 cri.go:89] found id: ""
	I1202 19:27:33.000525   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.000532   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:33.000537   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:33.000591   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:33.030910   46141 cri.go:89] found id: ""
	I1202 19:27:33.030924   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.030931   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:33.030936   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:33.030993   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:33.055909   46141 cri.go:89] found id: ""
	I1202 19:27:33.055922   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.055929   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:33.055937   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:33.055947   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:33.121449   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:33.121471   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:33.134922   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:33.134955   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:33.198500   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:33.189992   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.190784   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.192519   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.193191   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.194708   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:33.189992   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.190784   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.192519   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.193191   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.194708   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:33.198512   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:33.198524   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:33.275340   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:33.275358   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:35.803184   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:35.814556   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:35.814622   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:35.843911   46141 cri.go:89] found id: ""
	I1202 19:27:35.843927   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.843934   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:35.843939   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:35.844010   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:35.872792   46141 cri.go:89] found id: ""
	I1202 19:27:35.872807   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.872814   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:35.872819   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:35.872885   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:35.899563   46141 cri.go:89] found id: ""
	I1202 19:27:35.899576   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.899583   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:35.899588   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:35.899642   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:35.929110   46141 cri.go:89] found id: ""
	I1202 19:27:35.929133   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.929141   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:35.929147   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:35.929214   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:35.953603   46141 cri.go:89] found id: ""
	I1202 19:27:35.953617   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.953624   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:35.953629   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:35.953706   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:35.978487   46141 cri.go:89] found id: ""
	I1202 19:27:35.978501   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.978508   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:35.978513   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:35.978571   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:36.002610   46141 cri.go:89] found id: ""
	I1202 19:27:36.002623   46141 logs.go:282] 0 containers: []
	W1202 19:27:36.002629   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:36.002636   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:36.002647   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:36.078660   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:36.078679   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:36.108572   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:36.108589   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:36.174842   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:36.174858   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:36.185725   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:36.185740   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:36.248843   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:36.241261   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.241837   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243308   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243762   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.245492   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:36.241261   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.241837   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243308   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243762   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.245492   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
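	The repeated failures above all trace to the same condition: no kube-apiserver container is running, so every kubectl call against https://localhost:8441 is refused. Below is a minimal sketch of the same checks minikube runs in this log, for reproducing them by hand; it assumes shell access to the node (for example via `minikube ssh`, the profile name is not shown in this log), and uses only the commands that appear verbatim above.
	# Check for a running apiserver process (same pattern minikube greps for)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# List control-plane containers in CRI-O, one component at a time, as in the log above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done
	# Gather the same logs minikube collects when the components are missing
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig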
	I1202 19:27:38.749933   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:38.759902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:38.759959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:38.784371   46141 cri.go:89] found id: ""
	I1202 19:27:38.784384   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.784390   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:38.784396   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:38.784449   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:38.813903   46141 cri.go:89] found id: ""
	I1202 19:27:38.813918   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.813925   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:38.813930   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:38.813986   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:38.847704   46141 cri.go:89] found id: ""
	I1202 19:27:38.847718   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.847724   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:38.847730   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:38.847786   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:38.874126   46141 cri.go:89] found id: ""
	I1202 19:27:38.874139   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.874146   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:38.874151   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:38.874204   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:38.899808   46141 cri.go:89] found id: ""
	I1202 19:27:38.899822   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.899829   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:38.899835   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:38.899890   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:38.924777   46141 cri.go:89] found id: ""
	I1202 19:27:38.924791   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.924798   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:38.924804   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:38.924898   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:38.949761   46141 cri.go:89] found id: ""
	I1202 19:27:38.949774   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.949781   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:38.949788   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:38.949802   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:39.008770   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:39.001782   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.002283   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003482   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003943   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.005427   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:39.001782   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.002283   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003482   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003943   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.005427   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:39.008780   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:39.008794   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:39.090107   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:39.090125   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:39.122398   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:39.122414   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:39.187817   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:39.187833   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:41.698611   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:41.708767   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:41.708837   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:41.733990   46141 cri.go:89] found id: ""
	I1202 19:27:41.734004   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.734011   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:41.734017   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:41.734080   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:41.759279   46141 cri.go:89] found id: ""
	I1202 19:27:41.759293   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.759299   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:41.759305   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:41.759359   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:41.793259   46141 cri.go:89] found id: ""
	I1202 19:27:41.793272   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.793278   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:41.793284   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:41.793339   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:41.821458   46141 cri.go:89] found id: ""
	I1202 19:27:41.821471   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.821484   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:41.821489   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:41.821545   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:41.849637   46141 cri.go:89] found id: ""
	I1202 19:27:41.849670   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.849678   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:41.849683   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:41.849743   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:41.881100   46141 cri.go:89] found id: ""
	I1202 19:27:41.881113   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.881121   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:41.881127   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:41.881189   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:41.906054   46141 cri.go:89] found id: ""
	I1202 19:27:41.906067   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.906074   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:41.906082   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:41.906092   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:41.916746   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:41.916761   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:41.979747   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:41.971464   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.972400   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.973426   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.974900   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.975372   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:41.971464   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.972400   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.973426   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.974900   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.975372   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:41.979757   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:41.979767   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:42.054766   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:42.054787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:42.086163   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:42.086187   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:44.697773   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:44.707597   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:44.707659   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:44.733158   46141 cri.go:89] found id: ""
	I1202 19:27:44.733184   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.733191   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:44.733196   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:44.733261   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:44.757757   46141 cri.go:89] found id: ""
	I1202 19:27:44.757771   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.757778   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:44.757784   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:44.757843   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:44.783874   46141 cri.go:89] found id: ""
	I1202 19:27:44.783888   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.783897   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:44.783902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:44.783959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:44.816248   46141 cri.go:89] found id: ""
	I1202 19:27:44.816261   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.816268   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:44.816273   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:44.816327   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:44.847419   46141 cri.go:89] found id: ""
	I1202 19:27:44.847433   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.847440   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:44.847445   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:44.847504   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:44.873837   46141 cri.go:89] found id: ""
	I1202 19:27:44.873851   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.873858   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:44.873863   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:44.873918   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:44.897843   46141 cri.go:89] found id: ""
	I1202 19:27:44.897856   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.897863   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:44.897871   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:44.897881   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:44.966499   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:44.966516   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:44.978644   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:44.978659   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:45.054728   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:45.041553   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.042293   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046104   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046962   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.049337   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:45.041553   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.042293   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046104   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046962   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.049337   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:45.054738   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:45.054765   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:45.162639   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:45.162660   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:47.718000   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:47.727890   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:47.727953   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:47.752168   46141 cri.go:89] found id: ""
	I1202 19:27:47.752181   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.752188   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:47.752193   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:47.752253   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:47.776058   46141 cri.go:89] found id: ""
	I1202 19:27:47.776071   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.776078   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:47.776086   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:47.776143   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:47.809050   46141 cri.go:89] found id: ""
	I1202 19:27:47.809065   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.809072   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:47.809078   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:47.809142   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:47.851196   46141 cri.go:89] found id: ""
	I1202 19:27:47.851209   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.851222   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:47.851227   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:47.851285   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:47.877019   46141 cri.go:89] found id: ""
	I1202 19:27:47.877033   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.877039   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:47.877045   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:47.877104   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:47.906595   46141 cri.go:89] found id: ""
	I1202 19:27:47.906609   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.906616   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:47.906621   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:47.906684   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:47.931137   46141 cri.go:89] found id: ""
	I1202 19:27:47.931150   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.931157   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:47.931165   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:47.931175   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:47.960778   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:47.960794   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:48.026698   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:48.026716   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:48.039024   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:48.039040   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:48.104995   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:48.097226   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.097787   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.099441   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.100078   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.101639   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:48.097226   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.097787   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.099441   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.100078   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.101639   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:48.105014   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:48.105026   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:50.681972   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:50.691952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:50.692008   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:50.716419   46141 cri.go:89] found id: ""
	I1202 19:27:50.716432   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.716438   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:50.716443   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:50.716497   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:50.743698   46141 cri.go:89] found id: ""
	I1202 19:27:50.743712   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.743718   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:50.743723   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:50.743778   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:50.768264   46141 cri.go:89] found id: ""
	I1202 19:27:50.768277   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.768283   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:50.768297   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:50.768354   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:50.794403   46141 cri.go:89] found id: ""
	I1202 19:27:50.794428   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.794436   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:50.794441   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:50.794504   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:50.820731   46141 cri.go:89] found id: ""
	I1202 19:27:50.820745   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.820752   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:50.820757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:50.820812   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:50.852081   46141 cri.go:89] found id: ""
	I1202 19:27:50.852094   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.852101   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:50.852106   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:50.852172   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:50.879611   46141 cri.go:89] found id: ""
	I1202 19:27:50.879625   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.879631   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:50.879644   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:50.879654   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:50.906936   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:50.906951   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:50.975206   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:50.975223   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:50.985872   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:50.985895   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:51.052846   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:51.045333   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.046129   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.047754   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.048056   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.049501   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:51.045333   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.046129   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.047754   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.048056   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.049501   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:51.052855   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:51.052866   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:53.628857   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:53.638710   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:53.638773   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:53.662581   46141 cri.go:89] found id: ""
	I1202 19:27:53.662595   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.662602   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:53.662607   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:53.662660   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:53.687222   46141 cri.go:89] found id: ""
	I1202 19:27:53.687237   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.687244   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:53.687249   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:53.687306   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:53.711983   46141 cri.go:89] found id: ""
	I1202 19:27:53.711996   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.712003   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:53.712009   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:53.712065   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:53.737377   46141 cri.go:89] found id: ""
	I1202 19:27:53.737391   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.737398   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:53.737403   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:53.737456   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:53.765301   46141 cri.go:89] found id: ""
	I1202 19:27:53.765315   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.765321   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:53.765327   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:53.765383   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:53.793518   46141 cri.go:89] found id: ""
	I1202 19:27:53.793531   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.793537   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:53.793542   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:53.793597   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:53.822849   46141 cri.go:89] found id: ""
	I1202 19:27:53.822863   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.822870   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:53.822877   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:53.822887   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:53.854992   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:53.855010   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:53.921075   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:53.921094   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:53.931936   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:53.931951   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:53.995407   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:53.987658   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.988344   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990099   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990623   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.992109   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:53.987658   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.988344   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990099   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990623   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.992109   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:53.995422   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:53.995432   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:56.577211   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:56.588419   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:56.588476   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:56.617070   46141 cri.go:89] found id: ""
	I1202 19:27:56.617083   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.617090   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:56.617096   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:56.617149   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:56.644965   46141 cri.go:89] found id: ""
	I1202 19:27:56.644979   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.644986   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:56.644990   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:56.645050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:56.673885   46141 cri.go:89] found id: ""
	I1202 19:27:56.673899   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.673906   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:56.673911   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:56.673965   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:56.698577   46141 cri.go:89] found id: ""
	I1202 19:27:56.698590   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.698597   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:56.698603   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:56.698659   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:56.727980   46141 cri.go:89] found id: ""
	I1202 19:27:56.727995   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.728001   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:56.728007   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:56.728061   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:56.752295   46141 cri.go:89] found id: ""
	I1202 19:27:56.752309   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.752316   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:56.752321   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:56.752378   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:56.777216   46141 cri.go:89] found id: ""
	I1202 19:27:56.777228   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.777236   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:56.777243   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:56.777254   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:56.788028   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:56.788043   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:56.868442   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:56.860615   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.861389   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.862944   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.863447   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.865085   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:56.860615   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.861389   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.862944   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.863447   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.865085   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:56.868452   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:56.868462   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:56.944462   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:56.944480   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:56.979950   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:56.979964   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:59.548516   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:59.558289   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:59.558346   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:59.581971   46141 cri.go:89] found id: ""
	I1202 19:27:59.581984   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.581991   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:59.581997   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:59.582054   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:59.606472   46141 cri.go:89] found id: ""
	I1202 19:27:59.606485   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.606492   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:59.606497   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:59.606551   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:59.631964   46141 cri.go:89] found id: ""
	I1202 19:27:59.631977   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.631984   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:59.631989   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:59.632042   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:59.657151   46141 cri.go:89] found id: ""
	I1202 19:27:59.657164   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.657171   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:59.657177   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:59.657232   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:59.683812   46141 cri.go:89] found id: ""
	I1202 19:27:59.683826   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.683834   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:59.683840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:59.683901   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:59.712800   46141 cri.go:89] found id: ""
	I1202 19:27:59.712814   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.712821   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:59.712826   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:59.712900   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:59.745829   46141 cri.go:89] found id: ""
	I1202 19:27:59.745842   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.745849   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:59.745856   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:59.745868   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:59.817077   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:59.807605   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.808444   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.809916   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.810599   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.812428   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:59.807605   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.808444   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.809916   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.810599   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.812428   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:59.817087   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:59.817097   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:59.907455   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:59.907474   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:59.935466   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:59.935480   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:00.005487   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:00.005511   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
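	[Editor's note, not part of the captured output] The lines above are one full iteration of minikube's apiserver wait loop: it looks for a running kube-apiserver process, asks the CRI runtime (via crictl) for each control-plane container by name, finds none, and then falls back to gathering kubelet, dmesg, CRI-O, container-status, and `kubectl describe nodes` output; the describe step fails because nothing is listening on localhost:8441. The same cycle repeats every few seconds below. The sketch that follows is a hand-runnable restatement of the exact commands visible in these log lines; it assumes shell access to the same node and the same v1.35.0-beta.0 binary paths, and is illustrative only.

	# One diagnostic cycle, as run by ssh_runner in the log above (sketch).
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"   # empty output corresponds to: found id: ""
	done
	sudo journalctl -u kubelet -n 400            # kubelet logs
	sudo journalctl -u crio -n 400               # CRI-O logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a   # container status
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig  # fails: connection to localhost:8441 refused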
	I1202 19:28:02.519937   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:02.529900   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:02.529967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:02.555080   46141 cri.go:89] found id: ""
	I1202 19:28:02.555093   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.555099   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:02.555105   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:02.555160   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:02.579988   46141 cri.go:89] found id: ""
	I1202 19:28:02.580002   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.580009   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:02.580015   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:02.580069   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:02.604847   46141 cri.go:89] found id: ""
	I1202 19:28:02.604861   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.604868   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:02.604874   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:02.604937   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:02.629805   46141 cri.go:89] found id: ""
	I1202 19:28:02.629818   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.629825   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:02.629832   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:02.629888   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:02.654310   46141 cri.go:89] found id: ""
	I1202 19:28:02.654324   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.654330   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:02.654336   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:02.654393   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:02.683226   46141 cri.go:89] found id: ""
	I1202 19:28:02.683239   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.683246   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:02.683252   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:02.683306   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:02.707703   46141 cri.go:89] found id: ""
	I1202 19:28:02.707717   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.707724   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:02.707732   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:02.707741   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:02.783085   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:02.783103   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:02.829513   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:02.829528   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:02.903215   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:02.903231   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:02.914284   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:02.914302   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:02.974963   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:02.967707   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.968085   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.969699   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.970246   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.971730   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:02.967707   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.968085   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.969699   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.970246   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.971730   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:05.475826   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:05.485953   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:05.486009   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:05.512427   46141 cri.go:89] found id: ""
	I1202 19:28:05.512440   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.512447   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:05.512453   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:05.512509   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:05.536678   46141 cri.go:89] found id: ""
	I1202 19:28:05.536691   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.536698   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:05.536703   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:05.536757   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:05.561732   46141 cri.go:89] found id: ""
	I1202 19:28:05.561745   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.561752   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:05.561757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:05.561810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:05.585989   46141 cri.go:89] found id: ""
	I1202 19:28:05.586003   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.586010   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:05.586015   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:05.586073   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:05.611860   46141 cri.go:89] found id: ""
	I1202 19:28:05.611891   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.611899   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:05.611904   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:05.611969   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:05.637502   46141 cri.go:89] found id: ""
	I1202 19:28:05.637516   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.637523   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:05.637528   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:05.637583   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:05.662486   46141 cri.go:89] found id: ""
	I1202 19:28:05.662499   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.662506   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:05.662514   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:05.662525   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:05.727597   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:05.727615   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:05.738294   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:05.738309   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:05.810066   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:05.802014   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.802691   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804141   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804634   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.806227   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:05.802014   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.802691   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804141   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804634   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.806227   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:05.810076   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:05.810088   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:05.892482   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:05.892506   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:08.423125   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:08.433033   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:08.433090   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:08.458175   46141 cri.go:89] found id: ""
	I1202 19:28:08.458189   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.458195   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:08.458201   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:08.458257   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:08.483893   46141 cri.go:89] found id: ""
	I1202 19:28:08.483906   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.483913   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:08.483918   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:08.483974   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:08.507923   46141 cri.go:89] found id: ""
	I1202 19:28:08.507937   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.507953   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:08.507964   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:08.508081   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:08.537015   46141 cri.go:89] found id: ""
	I1202 19:28:08.537030   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.537041   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:08.537046   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:08.537102   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:08.562386   46141 cri.go:89] found id: ""
	I1202 19:28:08.562399   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.562405   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:08.562410   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:08.562464   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:08.589367   46141 cri.go:89] found id: ""
	I1202 19:28:08.589380   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.589387   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:08.589392   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:08.589446   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:08.614763   46141 cri.go:89] found id: ""
	I1202 19:28:08.614776   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.614782   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:08.614790   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:08.614806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:08.680003   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:08.680020   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:08.691092   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:08.691108   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:08.758435   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:08.751102   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.751837   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.753302   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.754013   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.755221   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:08.751102   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.751837   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.753302   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.754013   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.755221   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:08.758444   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:08.758455   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:08.838206   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:08.838225   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:11.377402   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:11.387381   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:11.387443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:11.416000   46141 cri.go:89] found id: ""
	I1202 19:28:11.416013   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.416020   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:11.416025   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:11.416086   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:11.440887   46141 cri.go:89] found id: ""
	I1202 19:28:11.440900   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.440907   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:11.440913   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:11.440980   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:11.469507   46141 cri.go:89] found id: ""
	I1202 19:28:11.469520   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.469527   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:11.469533   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:11.469589   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:11.494304   46141 cri.go:89] found id: ""
	I1202 19:28:11.494324   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.494331   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:11.494337   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:11.494395   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:11.519823   46141 cri.go:89] found id: ""
	I1202 19:28:11.519836   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.519843   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:11.519848   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:11.519905   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:11.544959   46141 cri.go:89] found id: ""
	I1202 19:28:11.544972   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.544980   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:11.544985   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:11.545043   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:11.569409   46141 cri.go:89] found id: ""
	I1202 19:28:11.569422   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.569429   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:11.569437   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:11.569449   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:11.605867   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:11.605883   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:11.672817   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:11.672835   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:11.683920   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:11.683937   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:11.748483   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:11.740906   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.741544   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743071   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743398   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.744924   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:11.740906   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.741544   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743071   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743398   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.744924   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:11.748494   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:11.748505   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:14.328100   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:14.338319   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:14.338385   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:14.368273   46141 cri.go:89] found id: ""
	I1202 19:28:14.368287   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.368293   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:14.368299   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:14.368353   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:14.393695   46141 cri.go:89] found id: ""
	I1202 19:28:14.393708   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.393715   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:14.393720   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:14.393778   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:14.419532   46141 cri.go:89] found id: ""
	I1202 19:28:14.419546   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.419552   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:14.419558   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:14.419611   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:14.444792   46141 cri.go:89] found id: ""
	I1202 19:28:14.444806   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.444812   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:14.444818   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:14.444874   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:14.473002   46141 cri.go:89] found id: ""
	I1202 19:28:14.473015   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.473022   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:14.473027   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:14.473082   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:14.500557   46141 cri.go:89] found id: ""
	I1202 19:28:14.500570   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.500577   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:14.500583   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:14.500639   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:14.531570   46141 cri.go:89] found id: ""
	I1202 19:28:14.531583   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.531591   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:14.531598   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:14.531608   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:14.563367   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:14.563385   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:14.629330   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:14.629348   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:14.640467   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:14.640482   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:14.703192   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:14.695736   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.696103   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697646   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697966   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.699508   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:14.695736   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.696103   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697646   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697966   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.699508   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:14.703201   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:14.703212   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:17.280934   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:17.290754   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:17.290816   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:17.315632   46141 cri.go:89] found id: ""
	I1202 19:28:17.315645   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.315652   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:17.315657   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:17.315715   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:17.339240   46141 cri.go:89] found id: ""
	I1202 19:28:17.339256   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.339281   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:17.339304   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:17.339361   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:17.362387   46141 cri.go:89] found id: ""
	I1202 19:28:17.362401   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.362408   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:17.362415   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:17.362471   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:17.388183   46141 cri.go:89] found id: ""
	I1202 19:28:17.388197   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.388204   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:17.388209   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:17.388264   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:17.417561   46141 cri.go:89] found id: ""
	I1202 19:28:17.417575   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.417582   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:17.417588   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:17.417643   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:17.441561   46141 cri.go:89] found id: ""
	I1202 19:28:17.441574   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.441581   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:17.441596   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:17.441678   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:17.467464   46141 cri.go:89] found id: ""
	I1202 19:28:17.467477   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.467483   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:17.467491   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:17.467501   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:17.543368   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:17.543386   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:17.574792   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:17.574807   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:17.641345   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:17.641363   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:17.651872   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:17.651892   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:17.719233   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:17.711006   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.711827   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713430   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713940   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.715763   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:17.711006   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.711827   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713430   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713940   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.715763   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:20.219437   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:20.229376   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:20.229437   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:20.254960   46141 cri.go:89] found id: ""
	I1202 19:28:20.254973   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.254980   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:20.254985   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:20.255048   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:20.280663   46141 cri.go:89] found id: ""
	I1202 19:28:20.280676   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.280683   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:20.280688   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:20.280744   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:20.309275   46141 cri.go:89] found id: ""
	I1202 19:28:20.309288   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.309295   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:20.309300   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:20.309354   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:20.334255   46141 cri.go:89] found id: ""
	I1202 19:28:20.334268   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.334275   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:20.334281   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:20.334334   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:20.359290   46141 cri.go:89] found id: ""
	I1202 19:28:20.359303   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.359310   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:20.359330   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:20.359385   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:20.387906   46141 cri.go:89] found id: ""
	I1202 19:28:20.387919   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.387931   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:20.387937   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:20.387995   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:20.412377   46141 cri.go:89] found id: ""
	I1202 19:28:20.412391   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.412398   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:20.412406   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:20.412421   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:20.478975   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:20.478994   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:20.491271   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:20.491286   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:20.559186   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:20.551818   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.552365   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554024   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554320   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.555835   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:20.551818   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.552365   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554024   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554320   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.555835   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:20.559197   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:20.559208   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:20.635117   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:20.635135   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:23.163845   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:23.174025   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:23.174084   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:23.198952   46141 cri.go:89] found id: ""
	I1202 19:28:23.198965   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.198972   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:23.198977   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:23.199040   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:23.227109   46141 cri.go:89] found id: ""
	I1202 19:28:23.227122   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.227128   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:23.227133   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:23.227194   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:23.252085   46141 cri.go:89] found id: ""
	I1202 19:28:23.252099   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.252106   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:23.252111   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:23.252178   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:23.282041   46141 cri.go:89] found id: ""
	I1202 19:28:23.282054   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.282061   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:23.282066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:23.282120   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:23.306149   46141 cri.go:89] found id: ""
	I1202 19:28:23.306163   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.306170   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:23.306176   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:23.306231   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:23.330130   46141 cri.go:89] found id: ""
	I1202 19:28:23.330143   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.330158   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:23.330165   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:23.330232   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:23.354289   46141 cri.go:89] found id: ""
	I1202 19:28:23.354303   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.354309   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:23.354317   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:23.354327   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:23.421463   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:23.421481   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:23.432425   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:23.432442   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:23.499162   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:23.491387   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.491769   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493283   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493585   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.495084   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:23.491387   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.491769   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493283   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493585   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.495084   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:23.499185   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:23.499198   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:23.574769   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:23.574787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:26.102251   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:26.112999   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:26.113059   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:26.139511   46141 cri.go:89] found id: ""
	I1202 19:28:26.139527   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.139534   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:26.139539   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:26.139595   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:26.163810   46141 cri.go:89] found id: ""
	I1202 19:28:26.163823   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.163830   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:26.163845   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:26.163903   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:26.195678   46141 cri.go:89] found id: ""
	I1202 19:28:26.195691   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.195716   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:26.195721   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:26.195784   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:26.221498   46141 cri.go:89] found id: ""
	I1202 19:28:26.221512   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.221519   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:26.221524   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:26.221591   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:26.246377   46141 cri.go:89] found id: ""
	I1202 19:28:26.246391   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.246397   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:26.246402   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:26.246464   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:26.270652   46141 cri.go:89] found id: ""
	I1202 19:28:26.270665   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.270673   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:26.270678   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:26.270763   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:26.296694   46141 cri.go:89] found id: ""
	I1202 19:28:26.296707   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.296714   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:26.296722   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:26.296735   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:26.371620   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:26.362743   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.363658   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365278   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365832   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.367443   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:26.362743   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.363658   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365278   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365832   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.367443   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:26.371631   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:26.371641   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:26.451711   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:26.451734   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:26.483175   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:26.483191   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:26.549681   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:26.549701   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:29.061808   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:29.072772   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:29.072827   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:29.101985   46141 cri.go:89] found id: ""
	I1202 19:28:29.101999   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.102006   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:29.102013   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:29.102074   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:29.128784   46141 cri.go:89] found id: ""
	I1202 19:28:29.128797   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.128803   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:29.128808   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:29.128862   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:29.156726   46141 cri.go:89] found id: ""
	I1202 19:28:29.156740   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.156747   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:29.156753   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:29.156810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:29.186146   46141 cri.go:89] found id: ""
	I1202 19:28:29.186159   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.186167   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:29.186173   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:29.186230   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:29.210367   46141 cri.go:89] found id: ""
	I1202 19:28:29.210381   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.210387   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:29.210392   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:29.210448   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:29.234607   46141 cri.go:89] found id: ""
	I1202 19:28:29.234620   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.234635   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:29.234641   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:29.234695   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:29.260124   46141 cri.go:89] found id: ""
	I1202 19:28:29.260137   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.260144   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:29.260151   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:29.260161   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:29.270869   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:29.270885   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:29.335425   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:29.327338   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.328130   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.329607   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.330178   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.332063   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:29.327338   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.328130   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.329607   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.330178   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.332063   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:29.335435   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:29.335448   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:29.416026   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:29.416053   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:29.444738   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:29.444757   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:32.015450   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:32.028692   46141 kubeadm.go:602] duration metric: took 4m2.303606504s to restartPrimaryControlPlane
	W1202 19:28:32.028752   46141 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 19:28:32.028882   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:28:32.448460   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:28:32.461105   46141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:28:32.468953   46141 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:28:32.469018   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:28:32.476620   46141 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:28:32.476629   46141 kubeadm.go:158] found existing configuration files:
	
	I1202 19:28:32.476680   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:28:32.484342   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:28:32.484396   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:28:32.491816   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:28:32.499468   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:28:32.499526   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:28:32.506680   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:28:32.513998   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:28:32.514056   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:28:32.521915   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:28:32.529746   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:28:32.529813   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:28:32.537427   46141 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:28:32.575514   46141 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:28:32.575563   46141 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:28:32.649801   46141 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:28:32.649866   46141 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:28:32.649900   46141 kubeadm.go:319] OS: Linux
	I1202 19:28:32.649943   46141 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:28:32.649990   46141 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:28:32.650036   46141 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:28:32.650083   46141 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:28:32.650129   46141 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:28:32.650176   46141 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:28:32.650220   46141 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:28:32.650266   46141 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:28:32.650311   46141 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:28:32.711361   46141 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:28:32.711478   46141 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:28:32.711574   46141 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:28:32.719716   46141 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:28:32.725408   46141 out.go:252]   - Generating certificates and keys ...
	I1202 19:28:32.725506   46141 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:28:32.725580   46141 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:28:32.725675   46141 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:28:32.725741   46141 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:28:32.725818   46141 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:28:32.725877   46141 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:28:32.725939   46141 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:28:32.726006   46141 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:28:32.726085   46141 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:28:32.726169   46141 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:28:32.726206   46141 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:28:32.726266   46141 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:28:32.962990   46141 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:28:33.139589   46141 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:28:33.816592   46141 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:28:34.040085   46141 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:28:34.279545   46141 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:28:34.280074   46141 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:28:34.282763   46141 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:28:34.285708   46141 out.go:252]   - Booting up control plane ...
	I1202 19:28:34.285809   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:28:34.285891   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:28:34.288012   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:28:34.303407   46141 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:28:34.303530   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:28:34.311292   46141 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:28:34.311561   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:28:34.311687   46141 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:28:34.441389   46141 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:28:34.442903   46141 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:32:34.442631   46141 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001443729s
	I1202 19:32:34.442655   46141 kubeadm.go:319] 
	I1202 19:32:34.442716   46141 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:32:34.442751   46141 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:32:34.442868   46141 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:32:34.442876   46141 kubeadm.go:319] 
	I1202 19:32:34.443019   46141 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:32:34.443050   46141 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:32:34.443105   46141 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:32:34.443119   46141 kubeadm.go:319] 
	I1202 19:32:34.446600   46141 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:32:34.447010   46141 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:32:34.447116   46141 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:32:34.447358   46141 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:32:34.447364   46141 kubeadm.go:319] 
	I1202 19:32:34.447431   46141 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 19:32:34.447530   46141 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443729s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 19:32:34.447615   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:32:34.857158   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:32:34.869767   46141 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:32:34.869822   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:32:34.877453   46141 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:32:34.877463   46141 kubeadm.go:158] found existing configuration files:
	
	I1202 19:32:34.877520   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:32:34.885001   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:32:34.885057   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:32:34.892315   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:32:34.899801   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:32:34.899854   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:32:34.907104   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:32:34.914843   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:32:34.914905   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:32:34.922357   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:32:34.930005   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:32:34.930062   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:32:34.937883   46141 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:32:34.977710   46141 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:32:34.977941   46141 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:32:35.052803   46141 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:32:35.052872   46141 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:32:35.052916   46141 kubeadm.go:319] OS: Linux
	I1202 19:32:35.052967   46141 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:32:35.053025   46141 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:32:35.053081   46141 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:32:35.053132   46141 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:32:35.053189   46141 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:32:35.053247   46141 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:32:35.053296   46141 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:32:35.053361   46141 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:32:35.053405   46141 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:32:35.129057   46141 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:32:35.129160   46141 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:32:35.129249   46141 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:32:35.136437   46141 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:32:35.141766   46141 out.go:252]   - Generating certificates and keys ...
	I1202 19:32:35.141858   46141 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:32:35.141951   46141 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:32:35.142045   46141 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:32:35.142120   46141 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:32:35.142195   46141 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:32:35.142254   46141 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:32:35.142330   46141 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:32:35.142391   46141 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:32:35.142465   46141 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:32:35.142537   46141 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:32:35.142573   46141 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:32:35.142628   46141 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:32:35.719108   46141 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:32:35.855328   46141 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:32:36.315829   46141 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:32:36.611755   46141 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:32:36.762758   46141 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:32:36.763311   46141 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:32:36.766390   46141 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:32:36.769564   46141 out.go:252]   - Booting up control plane ...
	I1202 19:32:36.769677   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:32:36.769754   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:32:36.771251   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:32:36.785826   46141 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:32:36.785928   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:32:36.793103   46141 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:32:36.793426   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:32:36.793594   46141 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:32:36.913663   46141 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:32:36.913775   46141 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:36:36.914797   46141 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001215513s
	I1202 19:36:36.914820   46141 kubeadm.go:319] 
	I1202 19:36:36.914918   46141 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:36:36.915114   46141 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:36:36.915295   46141 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:36:36.915303   46141 kubeadm.go:319] 
	I1202 19:36:36.915482   46141 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:36:36.915772   46141 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:36:36.915825   46141 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:36:36.915828   46141 kubeadm.go:319] 
	I1202 19:36:36.923850   46141 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:36:36.924318   46141 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:36:36.924432   46141 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:36:36.924695   46141 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:36:36.924703   46141 kubeadm.go:319] 
	I1202 19:36:36.924833   46141 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 19:36:36.924858   46141 kubeadm.go:403] duration metric: took 12m7.236978439s to StartCluster
	I1202 19:36:36.924902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:36:36.924959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:36:36.952746   46141 cri.go:89] found id: ""
	I1202 19:36:36.952760   46141 logs.go:282] 0 containers: []
	W1202 19:36:36.952767   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:36:36.952772   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:36:36.952828   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:36:36.977200   46141 cri.go:89] found id: ""
	I1202 19:36:36.977214   46141 logs.go:282] 0 containers: []
	W1202 19:36:36.977221   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:36:36.977226   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:36:36.977291   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:36:37.002232   46141 cri.go:89] found id: ""
	I1202 19:36:37.002246   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.002253   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:36:37.002258   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:36:37.002321   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:36:37.051601   46141 cri.go:89] found id: ""
	I1202 19:36:37.051615   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.051621   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:36:37.051626   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:36:37.051681   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:36:37.102950   46141 cri.go:89] found id: ""
	I1202 19:36:37.102976   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.102983   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:36:37.102988   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:36:37.103051   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:36:37.131342   46141 cri.go:89] found id: ""
	I1202 19:36:37.131355   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.131362   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:36:37.131368   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:36:37.131423   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:36:37.159192   46141 cri.go:89] found id: ""
	I1202 19:36:37.159206   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.159213   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:36:37.159221   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:36:37.159234   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:36:37.170095   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:36:37.170110   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:36:37.234222   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:36:37.226220   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.226951   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.228557   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.229059   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.230654   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:36:37.226220   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.226951   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.228557   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.229059   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.230654   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:36:37.234232   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:36:37.234242   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:36:37.306216   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:36:37.306234   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:36:37.334163   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:36:37.334178   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 19:36:37.399997   46141 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 19:36:37.400040   46141 out.go:285] * 
	W1202 19:36:37.400110   46141 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:36:37.400129   46141 out.go:285] * 
	W1202 19:36:37.402271   46141 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:36:37.407816   46141 out.go:203] 
	W1202 19:36:37.411562   46141 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:36:37.411641   46141 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 19:36:37.411664   46141 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 19:36:37.415811   46141 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546654939Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546834414Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546950457Z" level=info msg="Create NRI interface"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.5471107Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547130474Z" level=info msg="runtime interface created"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.54714466Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547151634Z" level=info msg="runtime interface starting up..."
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547157616Z" level=info msg="starting plugins..."
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547170686Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547251727Z" level=info msg="No systemd watchdog enabled"
	Dec 02 19:24:28 functional-374330 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.715009926Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=bc19958f-d803-4cd2-a545-4f6c118c1f40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.716039792Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=97921bbe-b2e3-494c-be19-702e5072b6db name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.716591601Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=702ce713-4736-4f82-bd4c-9fc9629fcb4d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.717128034Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5900f7cc-9a33-4e7a-8a73-829e63e64047 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.717627973Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0a3735ac-393a-45fe-a0d5-34b181ae2dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.718273997Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4854b9da-7f98-4e1b-9a6a-97fc85aeb622 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.718754056Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=f046502e-805f-4087-97ee-276ea86f9117 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.132448562Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=bfb0729f-fcf5-4cf1-8661-79e44060815d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.133109196Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=59868b2f-ef1f-42db-9580-1c52177e5173 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.133599056Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=0dadf3fc-12a7-405c-8560-5fb835ac24e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.134131974Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f3eedcce-a194-4413-8ad5-a61c4ca64183 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.134584067Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0d9672a7-dea9-4cd7-b618-4662ee6fbedc name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.135094472Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=61806fbf-e06a-40e0-ab81-3632b0f3ac8c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.135559257Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=e966dc55-aa48-4909-b2a5-1769d8bd5c4c name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:36:40.894276   21955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:40.895092   21955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:40.896834   21955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:40.897158   21955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:40.898618   21955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:36:40 up  1:18,  0 user,  load average: 0.22, 0.21, 0.28
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:36:38 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:36:39 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 02 19:36:39 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:39 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:39 functional-374330 kubelet[21829]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:39 functional-374330 kubelet[21829]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:39 functional-374330 kubelet[21829]: E1202 19:36:39.365522   21829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:36:39 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:36:39 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:36:40 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 02 19:36:40 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:40 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:40 functional-374330 kubelet[21865]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:40 functional-374330 kubelet[21865]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:40 functional-374330 kubelet[21865]: E1202 19:36:40.169454   21865 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:36:40 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:36:40 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:36:40 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 02 19:36:40 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:40 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:36:40 functional-374330 kubelet[21947]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:40 functional-374330 kubelet[21947]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:36:40 functional-374330 kubelet[21947]: E1202 19:36:40.852061   21947 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:36:40 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:36:40 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (387.298277ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.32s)
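
The kubelet journal above shows the actual root cause of this failure chain: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", so the control-plane static pods never start and the apiserver on port 8441 stays down. The log itself points at two knobs: the kubeadm warning names the kubelet configuration option 'FailCgroupV1', and minikube suggests '--extra-config=kubelet.cgroup-driver=systemd'. A manual triage sketch against this profile (any flag or command not quoted from the log above is an assumption, not something verified against this minikube build):

    # Inspect the crash-looping kubelet inside the node container (commands suggested in the log above).
    minikube -p functional-374330 ssh -- sudo systemctl status kubelet
    minikube -p functional-374330 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

    # Check whether the node is on cgroup v1 or v2; "cgroup2fs" means v2, "tmpfs" means v1.
    minikube -p functional-374330 ssh -- stat -fc %T /sys/fs/cgroup

    # Retry with the suggestion printed by minikube. This only aligns the kubelet cgroup driver with
    # systemd; re-enabling cgroup v1 itself would additionally need the FailCgroupV1=false kubelet option
    # named in the kubeadm warning above.
    minikube start -p functional-374330 --extra-config=kubelet.cgroup-driver=systemd
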
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-374330 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-374330 apply -f testdata/invalidsvc.yaml: exit status 1 (58.848321ms)
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test.go:2328: kubectl --context functional-374330 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
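
This failure is a downstream symptom rather than a problem with the manifest itself: kubectl cannot download the OpenAPI schema because nothing is listening on 192.168.49.2:8441, the apiserver that never came up because of the kubelet failure above. A quick way to separate "cluster down" from "manifest invalid", sketched with the same status command the harness runs:

    # Is the apiserver for this profile up at all?
    out/minikube-linux-arm64 status -p functional-374330 --format='{{.APIServer}}'

    # Once it reports Running, re-run the intended negative test. The --validate=false escape hatch
    # named in the error text only skips the OpenAPI download; it does not make the Service valid,
    # and testdata/invalidsvc.yaml is meant to be rejected.
    kubectl --context functional-374330 apply -f testdata/invalidsvc.yaml
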
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-374330 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-374330 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-374330 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-374330 --alsologtostderr -v=1] stderr:
I1202 19:38:53.460094   64525 out.go:360] Setting OutFile to fd 1 ...
I1202 19:38:53.460245   64525 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:53.460262   64525 out.go:374] Setting ErrFile to fd 2...
I1202 19:38:53.460281   64525 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:53.460673   64525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:38:53.461040   64525 mustload.go:66] Loading cluster: functional-374330
I1202 19:38:53.461776   64525 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:38:53.462488   64525 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
I1202 19:38:53.480578   64525 host.go:66] Checking if "functional-374330" exists ...
I1202 19:38:53.480901   64525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 19:38:53.534010   64525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:53.525118411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 19:38:53.534130   64525 api_server.go:166] Checking apiserver status ...
I1202 19:38:53.534195   64525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 19:38:53.534239   64525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
I1202 19:38:53.564920   64525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
W1202 19:38:53.670582   64525 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1202 19:38:53.673787   64525 out.go:179] * The control-plane node functional-374330 apiserver is not running: (state=Stopped)
I1202 19:38:53.676613   64525 out.go:179]   To start a cluster, run: "minikube start -p functional-374330"
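
The dashboard command gives up for the same underlying reason: its apiserver probe, visible above as 'sudo pgrep -xnf kube-apiserver.*minikube.*' run over SSH, finds no kube-apiserver process, so no URL is ever produced. The probe can be reproduced by hand; a small sketch reusing the pattern from the log (an exit status of 1 from pgrep means no matching process):

    # Re-run the apiserver liveness probe the dashboard command performs inside the node container.
    minikube -p functional-374330 ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" \
      || echo "kube-apiserver is not running in functional-374330"
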
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
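
The inspect output above confirms the node container is healthy from Docker's point of view: it is running, the apiserver port 8441/tcp is published on 127.0.0.1:32786, and the node holds 192.168.49.2 on the functional-374330 network, so the failure is inside the node rather than in Docker networking. The same fields can be pulled with Go templates, mirroring the 'docker container inspect -f' call the tooling uses earlier in this log; a sketch for this particular run (the host port below is the one from this dump and changes per run):

    # Host port Docker maps to the apiserver port 8441 inside the node container.
    docker container inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-374330

    # Node IP on the minikube network (matches 192.168.49.2 in the dump above).
    docker container inspect -f '{{ (index .NetworkSettings.Networks "functional-374330").IPAddress }}' functional-374330

    # The port is published but nothing answers, because kube-apiserver never started:
    curl -sk https://127.0.0.1:32786/healthz || echo "no apiserver behind the published port"
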
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (307.464795ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount1 --alsologtostderr -v=1                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount     │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount3 --alsologtostderr -v=1                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh       │ functional-374330 ssh findmnt -T /mount1                                                                                                                  │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount     │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount2 --alsologtostderr -v=1                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh       │ functional-374330 ssh findmnt -T /mount1                                                                                                                  │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh       │ functional-374330 ssh findmnt -T /mount2                                                                                                                  │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh       │ functional-374330 ssh findmnt -T /mount3                                                                                                                  │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ mount     │ -p functional-374330 --kill=true                                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh       │ functional-374330 ssh sudo systemctl is-active docker                                                                                                     │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh       │ functional-374330 ssh sudo systemctl is-active containerd                                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ image     │ functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image ls                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image ls                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image ls                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image save kicbase/echo-server:functional-374330 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image rm kicbase/echo-server:functional-374330 --alsologtostderr                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image ls                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image     │ functional-374330 image save --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ start     │ -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ start     │ -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ start     │ -p functional-374330 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-374330 --alsologtostderr -v=1                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:38:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:38:53.228034   64453 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:38:53.228160   64453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:53.228172   64453 out.go:374] Setting ErrFile to fd 2...
	I1202 19:38:53.228176   64453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:53.228427   64453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:38:53.228783   64453 out.go:368] Setting JSON to false
	I1202 19:38:53.229577   64453 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4872,"bootTime":1764699462,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:38:53.229645   64453 start.go:143] virtualization:  
	I1202 19:38:53.232943   64453 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:38:53.235895   64453 notify.go:221] Checking for updates...
	I1202 19:38:53.236732   64453 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:38:53.240137   64453 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:38:53.242950   64453 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:38:53.245724   64453 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:38:53.248553   64453 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:38:53.251341   64453 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:38:53.254696   64453 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:38:53.255303   64453 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:38:53.276424   64453 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:38:53.276532   64453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:38:53.344863   64453 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:53.336070516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:38:53.344964   64453 docker.go:319] overlay module found
	I1202 19:38:53.348139   64453 out.go:179] * Using the docker driver based on existing profile
	I1202 19:38:53.350863   64453 start.go:309] selected driver: docker
	I1202 19:38:53.350879   64453 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:38:53.351022   64453 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:38:53.351137   64453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:38:53.405866   64453 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:53.396812701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:38:53.406270   64453 cni.go:84] Creating CNI manager for ""
	I1202 19:38:53.406342   64453 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:38:53.406382   64453 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:38:53.409346   64453 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.777069151Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=0b5acd9e-3dc2-4d8e-bdd5-4eea4b6dba9b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.800434178Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.8005674Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.800604847Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.623558249Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=c3d5d0bb-6081-41b1-93fe-5ad0cc5cb721 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647477581Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647625318Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647666326Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671059718Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671198462Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671239421Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.728365711Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=4bbd67ea-391b-43a5-b118-a6fcfbfb2e41 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.751873881Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.752015472Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.752053206Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778816904Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778943234Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778980575Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.556483837Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=cedd96f0-1f8c-4d01-a073-f9a1fec94943 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587720583Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587870527Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587912462Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615511735Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615657093Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615696862Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:38:54.716844   24516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:54.717575   24516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:54.719329   24516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:54.719736   24516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:54.721243   24516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:38:54 up  1:21,  0 user,  load average: 0.61, 0.36, 0.33
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:38:52 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:52 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 02 19:38:52 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:52 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:52 functional-374330 kubelet[24384]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:52 functional-374330 kubelet[24384]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:52 functional-374330 kubelet[24384]: E1202 19:38:52.849961   24384 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:52 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:52 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:53 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 02 19:38:53 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:53 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:53 functional-374330 kubelet[24404]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:53 functional-374330 kubelet[24404]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:53 functional-374330 kubelet[24404]: E1202 19:38:53.604910   24404 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:53 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:53 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:54 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 02 19:38:54 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:54 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:54 functional-374330 kubelet[24434]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:54 functional-374330 kubelet[24434]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:54 functional-374330 kubelet[24434]: E1202 19:38:54.352238   24434 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:54 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:54 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
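The kubelet section of the log above points at the underlying failure: every restart (counter 1141 through 1143) exits with "kubelet is configured to not run on a host using cgroup v1", which is consistent with the connection-refused errors against localhost:8441 earlier in the dump, since the static apiserver pod cannot come up while kubelet crash-loops. As an illustrative cross-check only (not part of the recorded run, and assuming the node container is still reachable via docker exec), the host cgroup mode can be read from the filesystem type mounted at /sys/fs/cgroup:

    # Illustrative check, not executed in this run.
    # "cgroup2fs" indicates cgroup v2; "tmpfs" indicates the legacy cgroup v1 hierarchy
    # that the kubelet validation above refuses to run on.
    docker exec functional-374330 stat -fc %T /sys/fs/cgroup/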
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (352.758859ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 status: exit status 2 (317.620579ms)

                                                
                                                
-- stdout --
	functional-374330
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-374330 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (295.064902ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-374330 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 status -o json: exit status 2 (296.019398ms)

                                                
                                                
-- stdout --
	{"Name":"functional-374330","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-374330 status -o json" : exit status 2
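The three status invocations above exercise the default, templated, and JSON output paths, and all exit with status 2 because the API server is reported as Stopped. For reference, a minimal manual reproduction of the JSON check would look like the following (an illustrative sketch; jq is an assumption and is not used by the test itself):

    # Extract the APIServer field from the status JSON.
    # "Stopped" here matches the failure recorded above.
    out/minikube-linux-arm64 -p functional-374330 status -o json | jq -r '.APIServer'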
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
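The inspect output shows the node container itself still Running, with the apiserver port 8441/tcp published on 127.0.0.1:32786; only the Kubernetes components inside it are down. The host-side mapping can be pulled directly with an inspect format template (an illustrative query, not part of the test run):

    # Prints the host port bound to the container's 8441/tcp (32786 in this run).
    docker inspect functional-374330 --format '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}'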
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (298.454776ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 logs -n 25: (1.102466466s)
E1202 19:38:46.177854    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-374330 service hello-node --url --format={{.IP}}                                                                                         │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ service │ functional-374330 service hello-node --url                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ license │                                                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ mount   │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001:/mount-9p --alsologtostderr -v=1              │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh -- ls -la /mount-9p                                                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh cat /mount-9p/test-1764704316591789491                                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh sudo umount -f /mount-9p                                                                                                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount   │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2052892142/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh -- ls -la /mount-9p                                                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh sudo umount -f /mount-9p                                                                                                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount   │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount1 --alsologtostderr -v=1                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount   │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount3 --alsologtostderr -v=1                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh findmnt -T /mount1                                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount   │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount2 --alsologtostderr -v=1                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh findmnt -T /mount1                                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh findmnt -T /mount2                                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh findmnt -T /mount3                                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ mount   │ -p functional-374330 --kill=true                                                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh sudo systemctl is-active docker                                                                                               │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh sudo systemctl is-active containerd                                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:24:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:24:25.235145   46141 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:24:25.235262   46141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:24:25.235266   46141 out.go:374] Setting ErrFile to fd 2...
	I1202 19:24:25.235270   46141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:24:25.235501   46141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:24:25.235832   46141 out.go:368] Setting JSON to false
	I1202 19:24:25.236657   46141 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4004,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:24:25.236712   46141 start.go:143] virtualization:  
	I1202 19:24:25.240137   46141 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:24:25.243026   46141 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:24:25.243116   46141 notify.go:221] Checking for updates...
	I1202 19:24:25.249453   46141 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:24:25.252235   46141 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:24:25.255042   46141 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:24:25.257985   46141 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:24:25.260839   46141 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:24:25.264178   46141 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:24:25.264323   46141 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:24:25.284942   46141 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:24:25.285038   46141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:24:25.377890   46141 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 19:24:25.369067605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:24:25.377983   46141 docker.go:319] overlay module found
	I1202 19:24:25.380979   46141 out.go:179] * Using the docker driver based on existing profile
	I1202 19:24:25.383947   46141 start.go:309] selected driver: docker
	I1202 19:24:25.383955   46141 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:25.384041   46141 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:24:25.384143   46141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:24:25.448724   46141 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 19:24:25.440009169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:24:25.449135   46141 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:24:25.449156   46141 cni.go:84] Creating CNI manager for ""
	I1202 19:24:25.449204   46141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:24:25.449250   46141 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:25.452291   46141 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:24:25.455020   46141 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:24:25.457907   46141 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:24:25.460700   46141 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:24:25.460741   46141 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:24:25.479854   46141 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:24:25.479865   46141 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:24:25.525268   46141 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:24:25.722344   46141 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:24:25.722516   46141 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:24:25.722575   46141 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722662   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:24:25.722674   46141 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.293µs
	I1202 19:24:25.722687   46141 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:24:25.722699   46141 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722728   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:24:25.722732   46141 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 34.97µs
	I1202 19:24:25.722737   46141 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722755   46141 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722765   46141 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:24:25.722787   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:24:25.722792   46141 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722800   46141 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.388µs
	I1202 19:24:25.722806   46141 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722816   46141 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722833   46141 start.go:364] duration metric: took 28.102µs to acquireMachinesLock for "functional-374330"
	I1202 19:24:25.722844   46141 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:24:25.722848   46141 fix.go:54] fixHost starting: 
	I1202 19:24:25.722868   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:24:25.722874   46141 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 59.51µs
	I1202 19:24:25.722879   46141 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722888   46141 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722914   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:24:25.722918   46141 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.859µs
	I1202 19:24:25.722926   46141 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722934   46141 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722961   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:24:25.722965   46141 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.041µs
	I1202 19:24:25.722969   46141 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:24:25.722984   46141 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.723013   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:24:25.723018   46141 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.477µs
	I1202 19:24:25.723022   46141 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:24:25.723030   46141 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.723054   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:24:25.723058   46141 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.956µs
	I1202 19:24:25.723062   46141 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:24:25.723069   46141 cache.go:87] Successfully saved all images to host disk.
	I1202 19:24:25.723135   46141 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:24:25.740024   46141 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:24:25.740043   46141 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:24:25.743422   46141 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:24:25.743444   46141 machine.go:94] provisionDockerMachine start ...
	I1202 19:24:25.743520   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:25.759952   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:25.760267   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:25.760274   46141 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:24:25.913242   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:24:25.913255   46141 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:24:25.913315   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:25.930816   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:25.931108   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:25.931116   46141 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:24:26.092717   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:24:26.092791   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.112703   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:26.112993   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:26.113006   46141 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:24:26.261761   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:24:26.261776   46141 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:24:26.261797   46141 ubuntu.go:190] setting up certificates
	I1202 19:24:26.261807   46141 provision.go:84] configureAuth start
	I1202 19:24:26.261862   46141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:24:26.279208   46141 provision.go:143] copyHostCerts
	I1202 19:24:26.279270   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:24:26.279282   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:24:26.279355   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:24:26.279450   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:24:26.279454   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:24:26.279478   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:24:26.279560   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:24:26.279563   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:24:26.279586   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:24:26.279633   46141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:24:26.509539   46141 provision.go:177] copyRemoteCerts
	I1202 19:24:26.509599   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:24:26.509644   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.526423   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:26.629290   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:24:26.645497   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:24:26.662152   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:24:26.678745   46141 provision.go:87] duration metric: took 416.916855ms to configureAuth
	I1202 19:24:26.678762   46141 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:24:26.678944   46141 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:24:26.679035   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.696214   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:26.696565   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:26.696576   46141 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:24:27.030556   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:24:27.030570   46141 machine.go:97] duration metric: took 1.287120124s to provisionDockerMachine
	I1202 19:24:27.030580   46141 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:24:27.030591   46141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:24:27.030695   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:24:27.030734   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.047988   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.153876   46141 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:24:27.157492   46141 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:24:27.157509   46141 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:24:27.157519   46141 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:24:27.157573   46141 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:24:27.157644   46141 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:24:27.157766   46141 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:24:27.157814   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:24:27.165310   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:24:27.182588   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:24:27.199652   46141 start.go:296] duration metric: took 169.058439ms for postStartSetup
	I1202 19:24:27.199721   46141 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:24:27.199772   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.216431   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.322237   46141 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:24:27.326538   46141 fix.go:56] duration metric: took 1.603683597s for fixHost
	I1202 19:24:27.326551   46141 start.go:83] releasing machines lock for "functional-374330", held for 1.603712807s
	I1202 19:24:27.326613   46141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:24:27.342449   46141 ssh_runner.go:195] Run: cat /version.json
	I1202 19:24:27.342488   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.342715   46141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:24:27.342781   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.364991   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.373848   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.555572   46141 ssh_runner.go:195] Run: systemctl --version
	I1202 19:24:27.562641   46141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:24:27.610413   46141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:24:27.614481   46141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:24:27.614543   46141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:24:27.622250   46141 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:24:27.622263   46141 start.go:496] detecting cgroup driver to use...
	I1202 19:24:27.622291   46141 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:24:27.622334   46141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:24:27.637407   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:24:27.650559   46141 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:24:27.650610   46141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:24:27.665862   46141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:24:27.678201   46141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:24:27.787007   46141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:24:27.899090   46141 docker.go:234] disabling docker service ...
	I1202 19:24:27.899177   46141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:24:27.914485   46141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:24:27.927681   46141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:24:28.045412   46141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:24:28.177124   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:24:28.189334   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:24:28.202961   46141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:24:28.203015   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.211343   46141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:24:28.211423   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.219933   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.227929   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.236036   46141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:24:28.243301   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.251359   46141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.259074   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.267235   46141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:24:28.274309   46141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:24:28.280789   46141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:24:28.409376   46141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:24:28.552601   46141 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:24:28.552676   46141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:24:28.556545   46141 start.go:564] Will wait 60s for crictl version
	I1202 19:24:28.556594   46141 ssh_runner.go:195] Run: which crictl
	I1202 19:24:28.560016   46141 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:24:28.584096   46141 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:24:28.584179   46141 ssh_runner.go:195] Run: crio --version
	I1202 19:24:28.612035   46141 ssh_runner.go:195] Run: crio --version
	I1202 19:24:28.644724   46141 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:24:28.647719   46141 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:24:28.663830   46141 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:24:28.670469   46141 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 19:24:28.673257   46141 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:24:28.673378   46141 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:24:28.673715   46141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:24:28.712979   46141 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:24:28.712990   46141 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:24:28.712996   46141 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:24:28.713091   46141 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:24:28.713167   46141 ssh_runner.go:195] Run: crio config
	I1202 19:24:28.766896   46141 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 19:24:28.766918   46141 cni.go:84] Creating CNI manager for ""
	I1202 19:24:28.766927   46141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:24:28.766941   46141 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:24:28.766963   46141 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:24:28.767080   46141 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:24:28.767147   46141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:24:28.774515   46141 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:24:28.774573   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:24:28.781818   46141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:24:28.793879   46141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:24:28.805690   46141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1202 19:24:28.818120   46141 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:24:28.821584   46141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:24:28.923612   46141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:24:29.044163   46141 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:24:29.044174   46141 certs.go:195] generating shared ca certs ...
	I1202 19:24:29.044188   46141 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:24:29.044325   46141 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:24:29.044362   46141 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:24:29.044367   46141 certs.go:257] generating profile certs ...
	I1202 19:24:29.044449   46141 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:24:29.044505   46141 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:24:29.044543   46141 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:24:29.044646   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:24:29.044677   46141 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:24:29.044683   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:24:29.044708   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:24:29.044730   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:24:29.044752   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:24:29.044793   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:24:29.045393   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:24:29.065539   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:24:29.085818   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:24:29.107933   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:24:29.124745   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:24:29.141714   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:24:29.158359   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:24:29.174925   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:24:29.191660   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:24:29.208637   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:24:29.226113   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:24:29.242250   46141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:24:29.254421   46141 ssh_runner.go:195] Run: openssl version
	I1202 19:24:29.260244   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:24:29.267946   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.271417   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.271472   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.312066   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:24:29.319673   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:24:29.327613   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.331149   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.331213   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.371529   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:24:29.378966   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:24:29.386811   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.390484   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.390535   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.430996   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:24:29.438578   46141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:24:29.442282   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:24:29.482760   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:24:29.523856   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:24:29.564389   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:24:29.604810   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:24:29.645380   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:24:29.687886   46141 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:29.687963   46141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:24:29.688021   46141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:24:29.717432   46141 cri.go:89] found id: ""
	I1202 19:24:29.717490   46141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:24:29.725067   46141 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:24:29.725077   46141 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:24:29.725126   46141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:24:29.732065   46141 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.732614   46141 kubeconfig.go:125] found "functional-374330" server: "https://192.168.49.2:8441"
	I1202 19:24:29.734000   46141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:24:29.741333   46141 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 19:09:53.796915722 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 19:24:28.810106590 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 19:24:29.741350   46141 kubeadm.go:1161] stopping kube-system containers ...
	I1202 19:24:29.741369   46141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 19:24:29.741422   46141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:24:29.768496   46141 cri.go:89] found id: ""
	I1202 19:24:29.768555   46141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 19:24:29.784309   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:24:29.792418   46141 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  2 19:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 19:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  2 19:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 19:14 /etc/kubernetes/scheduler.conf
	
	I1202 19:24:29.792472   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:24:29.800190   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:24:29.807339   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.807391   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:24:29.814250   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:24:29.821376   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.821427   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:24:29.828870   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:24:29.836580   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.836638   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:24:29.843919   46141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:24:29.851701   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:29.899912   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.003595   46141 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.103659313s)
	I1202 19:24:31.003654   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.210419   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.280327   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.324104   46141 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:24:31.324170   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:31.824388   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:32.324845   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:32.825182   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:33.324373   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:33.824654   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:34.325193   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:34.825112   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:35.324714   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:35.824303   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:36.324356   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:36.824683   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:37.324294   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:37.824358   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:38.324922   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:38.824376   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:39.324270   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:39.825008   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:40.324553   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:40.824838   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:41.325254   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:41.824311   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:42.324452   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:42.824362   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:43.325153   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:43.824379   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:44.324948   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:44.824287   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:45.325093   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:45.824914   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:46.324315   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:46.825135   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:47.324688   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:47.824319   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:48.325046   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:48.824341   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:49.324306   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:49.824985   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:50.324502   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:50.825062   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:51.325159   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:51.824329   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:52.324431   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:52.824365   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:53.324584   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:53.824229   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:54.324898   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:54.825268   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:55.324621   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:55.824623   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:56.325215   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:56.824326   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:57.324724   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:57.824643   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:58.325213   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:58.824317   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:59.324263   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:59.824993   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:00.324689   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:00.824372   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:01.324768   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:01.824973   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:02.324385   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:02.824324   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:03.325090   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:03.824792   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:04.324258   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:04.825092   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:05.324727   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:05.825067   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:06.325261   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:06.824374   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:07.324258   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:07.825117   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:08.324373   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:08.824931   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:09.324369   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:09.824858   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:10.324555   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:10.824370   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:11.324369   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:11.824824   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:12.325272   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:12.824975   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:13.324579   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:13.824349   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:14.324992   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:14.824471   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:15.325189   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:15.824307   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:16.324299   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:16.824860   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:17.324477   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:17.824853   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:18.324910   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:18.825002   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:19.324312   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:19.824665   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:20.324238   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:20.824261   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:21.325216   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:21.824750   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:22.324310   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:22.825285   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:23.325114   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:23.824701   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:24.324390   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:24.825161   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:25.325162   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:25.824364   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:26.324725   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:26.825185   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:27.324377   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:27.825213   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:28.324403   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:28.824310   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:29.324960   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:29.824818   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:30.325151   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:30.824591   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:31.324373   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:31.324449   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:31.353616   46141 cri.go:89] found id: ""
	I1202 19:25:31.353629   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.353636   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:31.353642   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:31.353718   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:31.378636   46141 cri.go:89] found id: ""
	I1202 19:25:31.378649   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.378656   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:31.378661   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:31.378716   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:31.403292   46141 cri.go:89] found id: ""
	I1202 19:25:31.403305   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.403312   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:31.403317   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:31.403371   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:31.427054   46141 cri.go:89] found id: ""
	I1202 19:25:31.427067   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.427074   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:31.427079   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:31.427133   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:31.451516   46141 cri.go:89] found id: ""
	I1202 19:25:31.451529   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.451536   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:31.451541   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:31.451595   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:31.474863   46141 cri.go:89] found id: ""
	I1202 19:25:31.474876   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.474889   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:31.474895   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:31.474967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:31.499414   46141 cri.go:89] found id: ""
	I1202 19:25:31.499427   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.499434   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:31.499442   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:31.499454   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:31.563997   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:31.564014   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:31.575066   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:31.575080   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:31.644130   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:31.635359   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.636084   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.637852   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.638523   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.640440   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:31.635359   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.636084   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.637852   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.638523   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.640440   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:31.644152   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:31.644164   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:31.720566   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:31.720584   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:34.247873   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:34.257765   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:34.257820   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:34.284109   46141 cri.go:89] found id: ""
	I1202 19:25:34.284122   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.284129   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:34.284134   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:34.284185   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:34.322934   46141 cri.go:89] found id: ""
	I1202 19:25:34.322947   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.322954   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:34.322959   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:34.323011   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:34.356765   46141 cri.go:89] found id: ""
	I1202 19:25:34.356778   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.356785   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:34.356790   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:34.356843   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:34.383799   46141 cri.go:89] found id: ""
	I1202 19:25:34.383811   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.383818   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:34.383824   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:34.383875   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:34.407104   46141 cri.go:89] found id: ""
	I1202 19:25:34.407117   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.407133   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:34.407139   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:34.407207   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:34.431504   46141 cri.go:89] found id: ""
	I1202 19:25:34.431517   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.431523   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:34.431529   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:34.431624   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:34.459463   46141 cri.go:89] found id: ""
	I1202 19:25:34.459477   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.459484   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:34.459492   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:34.459503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:34.524752   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:34.524770   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:34.537010   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:34.537025   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:34.599686   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:34.591441   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.592120   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.593844   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.594367   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.596057   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:34.591441   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.592120   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.593844   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.594367   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.596057   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:34.599696   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:34.599708   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:34.676464   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:34.676483   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:37.209911   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:37.219636   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:37.219691   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:37.243765   46141 cri.go:89] found id: ""
	I1202 19:25:37.243778   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.243785   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:37.243790   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:37.243842   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:37.272015   46141 cri.go:89] found id: ""
	I1202 19:25:37.272028   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.272035   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:37.272040   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:37.272096   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:37.296807   46141 cri.go:89] found id: ""
	I1202 19:25:37.296819   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.296835   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:37.296840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:37.296893   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:37.327436   46141 cri.go:89] found id: ""
	I1202 19:25:37.327449   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.327456   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:37.327461   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:37.327515   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:37.362906   46141 cri.go:89] found id: ""
	I1202 19:25:37.362919   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.362926   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:37.362931   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:37.362985   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:37.386876   46141 cri.go:89] found id: ""
	I1202 19:25:37.386889   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.386896   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:37.386902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:37.386976   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:37.410131   46141 cri.go:89] found id: ""
	I1202 19:25:37.410144   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.410151   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:37.410158   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:37.410169   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:37.420302   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:37.420317   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:37.483848   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:37.476112   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477214   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477968   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.478882   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.480477   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:37.476112   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477214   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477968   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.478882   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.480477   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:37.483857   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:37.483867   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:37.562871   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:37.562889   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:37.593595   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:37.593609   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:40.162349   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:40.172453   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:40.172514   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:40.199726   46141 cri.go:89] found id: ""
	I1202 19:25:40.199756   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.199763   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:40.199768   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:40.199825   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:40.229015   46141 cri.go:89] found id: ""
	I1202 19:25:40.229029   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.229037   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:40.229042   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:40.229097   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:40.255016   46141 cri.go:89] found id: ""
	I1202 19:25:40.255029   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.255036   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:40.255041   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:40.255104   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:40.280314   46141 cri.go:89] found id: ""
	I1202 19:25:40.280337   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.280343   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:40.280349   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:40.280409   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:40.317261   46141 cri.go:89] found id: ""
	I1202 19:25:40.317275   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.317281   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:40.317286   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:40.317351   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:40.350568   46141 cri.go:89] found id: ""
	I1202 19:25:40.350581   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.350588   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:40.350602   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:40.350655   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:40.376758   46141 cri.go:89] found id: ""
	I1202 19:25:40.376772   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.376786   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:40.376794   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:40.376805   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:40.452695   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:40.452719   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:40.478860   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:40.478875   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:40.558280   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:40.558307   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:40.569138   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:40.569159   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:40.633967   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:40.626503   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.627229   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.628749   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.629139   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.630662   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:40.626503   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.627229   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.628749   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.629139   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.630662   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:43.135632   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:43.145532   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:43.145592   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:43.170325   46141 cri.go:89] found id: ""
	I1202 19:25:43.170338   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.170345   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:43.170372   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:43.170432   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:43.194956   46141 cri.go:89] found id: ""
	I1202 19:25:43.194970   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.194977   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:43.194982   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:43.195039   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:43.221778   46141 cri.go:89] found id: ""
	I1202 19:25:43.221792   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.221800   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:43.221805   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:43.221862   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:43.248205   46141 cri.go:89] found id: ""
	I1202 19:25:43.248218   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.248225   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:43.248230   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:43.248283   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:43.275958   46141 cri.go:89] found id: ""
	I1202 19:25:43.275971   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.275979   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:43.275984   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:43.276040   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:43.311994   46141 cri.go:89] found id: ""
	I1202 19:25:43.312006   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.312013   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:43.312018   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:43.312070   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:43.338867   46141 cri.go:89] found id: ""
	I1202 19:25:43.338881   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.338888   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:43.338896   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:43.338907   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:43.370951   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:43.370966   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:43.439006   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:43.439023   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:43.449811   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:43.449827   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:43.523274   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:43.515029   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.515710   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.517443   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.518065   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.519776   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:43.515029   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.515710   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.517443   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.518065   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.519776   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:43.523283   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:43.523293   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:46.099316   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:46.109738   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:46.109799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:46.135973   46141 cri.go:89] found id: ""
	I1202 19:25:46.135986   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.135993   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:46.135998   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:46.136053   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:46.160433   46141 cri.go:89] found id: ""
	I1202 19:25:46.160447   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.160454   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:46.160459   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:46.160562   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:46.185345   46141 cri.go:89] found id: ""
	I1202 19:25:46.185358   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.185365   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:46.185371   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:46.185431   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:46.209708   46141 cri.go:89] found id: ""
	I1202 19:25:46.209721   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.209728   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:46.209733   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:46.209799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:46.234274   46141 cri.go:89] found id: ""
	I1202 19:25:46.234288   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.234294   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:46.234299   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:46.234363   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:46.259257   46141 cri.go:89] found id: ""
	I1202 19:25:46.259271   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.259277   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:46.259282   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:46.259336   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:46.282587   46141 cri.go:89] found id: ""
	I1202 19:25:46.282601   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.282607   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:46.282620   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:46.282630   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:46.360010   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:46.351882   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.352560   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354236   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354883   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.356438   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:46.351882   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.352560   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354236   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354883   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.356438   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:46.360029   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:46.360040   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:46.435864   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:46.435883   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:46.464582   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:46.464597   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:46.531766   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:46.531784   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:49.042500   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:49.053773   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:49.053830   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:49.079262   46141 cri.go:89] found id: ""
	I1202 19:25:49.079276   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.079282   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:49.079288   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:49.079342   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:49.104725   46141 cri.go:89] found id: ""
	I1202 19:25:49.104738   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.104745   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:49.104759   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:49.104814   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:49.133788   46141 cri.go:89] found id: ""
	I1202 19:25:49.133801   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.133808   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:49.133824   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:49.133880   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:49.159349   46141 cri.go:89] found id: ""
	I1202 19:25:49.159371   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.159379   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:49.159384   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:49.159443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:49.197548   46141 cri.go:89] found id: ""
	I1202 19:25:49.197562   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.197569   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:49.197574   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:49.197641   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:49.223472   46141 cri.go:89] found id: ""
	I1202 19:25:49.223485   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.223492   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:49.223498   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:49.223558   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:49.247894   46141 cri.go:89] found id: ""
	I1202 19:25:49.247921   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.247929   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:49.247936   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:49.247949   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:49.331462   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:49.331482   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:49.370297   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:49.370316   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:49.439052   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:49.439071   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:49.449975   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:49.449991   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:49.513463   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:49.505741   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.506188   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.507718   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.508375   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.510100   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:49.505741   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.506188   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.507718   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.508375   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.510100   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:52.015209   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:52.026897   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:52.026956   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:52.053387   46141 cri.go:89] found id: ""
	I1202 19:25:52.053401   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.053408   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:52.053416   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:52.053475   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:52.079773   46141 cri.go:89] found id: ""
	I1202 19:25:52.079787   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.079793   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:52.079799   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:52.079854   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:52.107526   46141 cri.go:89] found id: ""
	I1202 19:25:52.107539   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.107546   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:52.107551   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:52.107610   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:52.134040   46141 cri.go:89] found id: ""
	I1202 19:25:52.134054   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.134061   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:52.134066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:52.134124   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:52.160401   46141 cri.go:89] found id: ""
	I1202 19:25:52.160421   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.160445   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:52.160450   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:52.160512   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:52.186015   46141 cri.go:89] found id: ""
	I1202 19:25:52.186029   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.186035   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:52.186041   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:52.186097   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:52.211315   46141 cri.go:89] found id: ""
	I1202 19:25:52.211328   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.211335   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:52.211342   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:52.211352   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:52.281330   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:52.281350   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:52.294618   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:52.294634   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:52.375867   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:52.366509   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.367120   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370063   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370490   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.371961   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:52.366509   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.367120   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370063   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370490   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.371961   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:52.375884   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:52.375895   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:52.454410   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:52.454433   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
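	Each gather cycle above re-runs the same diagnostics inside the node. A minimal sketch of the equivalent commands, assuming shell access to the node (for example via `minikube ssh`); the component names, journal units, and paths are taken directly from the log lines above:
	
	    # check for control-plane containers known to CRI-O
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name="$c"
	    done
	    # recent kubelet and CRI-O journal entries
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # kernel warnings and overall container status
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a || sudo docker ps -a
	    # node description via the bundled kubectl (fails here because the apiserver is unreachable)
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig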
	I1202 19:25:54.985073   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:54.997287   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:54.997351   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:55.033193   46141 cri.go:89] found id: ""
	I1202 19:25:55.033207   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.033214   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:55.033220   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:55.033285   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:55.059947   46141 cri.go:89] found id: ""
	I1202 19:25:55.059961   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.059968   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:55.059973   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:55.060032   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:55.089719   46141 cri.go:89] found id: ""
	I1202 19:25:55.089731   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.089738   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:55.089744   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:55.089804   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:55.116791   46141 cri.go:89] found id: ""
	I1202 19:25:55.116805   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.116811   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:55.116816   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:55.116872   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:55.144575   46141 cri.go:89] found id: ""
	I1202 19:25:55.144589   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.144597   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:55.144602   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:55.144663   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:55.170532   46141 cri.go:89] found id: ""
	I1202 19:25:55.170546   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.170553   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:55.170558   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:55.170613   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:55.201295   46141 cri.go:89] found id: ""
	I1202 19:25:55.201309   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.201317   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:55.201324   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:55.201335   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:55.265951   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:55.265968   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:55.276457   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:55.276472   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:55.358449   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:55.350289   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.351045   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.352737   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.353291   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.354841   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:55.350289   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.351045   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.352737   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.353291   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.354841   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:55.358470   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:55.358481   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:55.438382   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:55.438401   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:57.969884   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:57.980234   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:57.980287   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:58.005151   46141 cri.go:89] found id: ""
	I1202 19:25:58.005165   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.005172   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:58.005177   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:58.005234   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:58.032254   46141 cri.go:89] found id: ""
	I1202 19:25:58.032267   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.032274   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:58.032279   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:58.032338   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:58.058556   46141 cri.go:89] found id: ""
	I1202 19:25:58.058570   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.058578   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:58.058583   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:58.058640   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:58.084123   46141 cri.go:89] found id: ""
	I1202 19:25:58.084136   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.084143   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:58.084148   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:58.084204   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:58.110792   46141 cri.go:89] found id: ""
	I1202 19:25:58.110806   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.110812   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:58.110820   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:58.110877   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:58.136499   46141 cri.go:89] found id: ""
	I1202 19:25:58.136512   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.136519   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:58.136524   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:58.136585   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:58.162083   46141 cri.go:89] found id: ""
	I1202 19:25:58.162096   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.162104   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:58.162111   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:58.162121   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:58.223736   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:58.216185   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.216966   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218585   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218890   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.220343   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:58.216185   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.216966   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218585   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218890   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.220343   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:58.223745   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:58.223756   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:58.308033   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:58.308051   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:58.341126   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:58.341141   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:58.407826   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:58.407843   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
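	Every describe-nodes attempt in these cycles fails with "connection refused" on [::1]:8441, which means nothing is listening on the apiserver port inside the node. A minimal sketch for confirming that from the node, assuming the standard kubeadm static-pod layout and the same tools used in the commands above:
	
	    # is anything listening on the apiserver port?
	    sudo ss -ltnp | grep 8441
	    # are the control-plane static pod manifests present for the kubelet to start?
	    ls /etc/kubernetes/manifests/
	    # did the kubelet log anything about starting the apiserver?
	    sudo journalctl -u kubelet -n 200 --no-pager | grep -i apiserver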
	I1202 19:26:00.920333   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:00.930302   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:00.930359   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:00.954390   46141 cri.go:89] found id: ""
	I1202 19:26:00.954404   46141 logs.go:282] 0 containers: []
	W1202 19:26:00.954411   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:00.954416   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:00.954483   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:00.980266   46141 cri.go:89] found id: ""
	I1202 19:26:00.980280   46141 logs.go:282] 0 containers: []
	W1202 19:26:00.980287   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:00.980292   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:00.980360   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:01.008460   46141 cri.go:89] found id: ""
	I1202 19:26:01.008482   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.008488   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:01.008493   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:01.008547   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:01.036672   46141 cri.go:89] found id: ""
	I1202 19:26:01.036686   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.036692   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:01.036698   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:01.036753   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:01.061548   46141 cri.go:89] found id: ""
	I1202 19:26:01.061562   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.061568   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:01.061573   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:01.061629   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:01.086617   46141 cri.go:89] found id: ""
	I1202 19:26:01.086631   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.086638   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:01.086643   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:01.086701   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:01.111676   46141 cri.go:89] found id: ""
	I1202 19:26:01.111690   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.111697   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:01.111704   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:01.111714   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:01.176991   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:01.177017   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:01.188305   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:01.188339   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:01.254955   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:01.246696   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.247518   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249195   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249523   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.251117   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:01.246696   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.247518   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249195   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249523   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.251117   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:01.254966   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:01.254977   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:01.336825   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:01.336852   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:03.866716   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:03.876694   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:03.876752   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:03.900150   46141 cri.go:89] found id: ""
	I1202 19:26:03.900164   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.900170   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:03.900176   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:03.900231   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:03.928045   46141 cri.go:89] found id: ""
	I1202 19:26:03.928059   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.928066   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:03.928071   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:03.928128   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:03.952359   46141 cri.go:89] found id: ""
	I1202 19:26:03.952372   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.952379   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:03.952384   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:03.952439   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:03.977113   46141 cri.go:89] found id: ""
	I1202 19:26:03.977127   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.977134   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:03.977139   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:03.977195   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:04.001871   46141 cri.go:89] found id: ""
	I1202 19:26:04.001884   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.001890   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:04.001896   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:04.001950   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:04.029122   46141 cri.go:89] found id: ""
	I1202 19:26:04.029136   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.029143   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:04.029148   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:04.029206   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:04.059191   46141 cri.go:89] found id: ""
	I1202 19:26:04.059205   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.059212   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:04.059219   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:04.059228   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:04.125149   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:04.125166   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:04.136144   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:04.136159   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:04.198077   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:04.190117   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.190591   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192194   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192641   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.194070   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:04.190117   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.190591   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192194   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192641   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.194070   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:04.198088   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:04.198098   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:04.273217   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:04.273235   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:06.807224   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:06.817250   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:06.817318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:06.845880   46141 cri.go:89] found id: ""
	I1202 19:26:06.845895   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.845902   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:06.845908   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:06.845963   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:06.870846   46141 cri.go:89] found id: ""
	I1202 19:26:06.870859   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.870866   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:06.870871   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:06.870927   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:06.896774   46141 cri.go:89] found id: ""
	I1202 19:26:06.896788   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.896794   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:06.896800   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:06.896857   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:06.924394   46141 cri.go:89] found id: ""
	I1202 19:26:06.924407   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.924414   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:06.924419   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:06.924477   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:06.951775   46141 cri.go:89] found id: ""
	I1202 19:26:06.951789   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.951796   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:06.951804   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:06.951865   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:06.976656   46141 cri.go:89] found id: ""
	I1202 19:26:06.976674   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.976682   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:06.976687   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:06.976743   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:07.002712   46141 cri.go:89] found id: ""
	I1202 19:26:07.002726   46141 logs.go:282] 0 containers: []
	W1202 19:26:07.002741   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:07.002753   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:07.002764   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:07.071978   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:07.063868   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.064510   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066274   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066863   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.068501   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:07.063868   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.064510   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066274   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066863   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.068501   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:07.071988   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:07.072001   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:07.148506   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:07.148525   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:07.177526   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:07.177542   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:07.244597   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:07.244614   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:09.755980   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:09.766062   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:09.766136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:09.791272   46141 cri.go:89] found id: ""
	I1202 19:26:09.791285   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.791292   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:09.791297   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:09.791352   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:09.819809   46141 cri.go:89] found id: ""
	I1202 19:26:09.819822   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.819829   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:09.819834   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:09.819890   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:09.845138   46141 cri.go:89] found id: ""
	I1202 19:26:09.845151   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.845158   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:09.845163   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:09.845233   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:09.869181   46141 cri.go:89] found id: ""
	I1202 19:26:09.869194   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.869201   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:09.869215   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:09.869269   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:09.894166   46141 cri.go:89] found id: ""
	I1202 19:26:09.894180   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.894187   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:09.894192   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:09.894246   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:09.918581   46141 cri.go:89] found id: ""
	I1202 19:26:09.918594   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.918601   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:09.918606   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:09.918670   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:09.943199   46141 cri.go:89] found id: ""
	I1202 19:26:09.943213   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.943219   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:09.943227   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:09.943238   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:10.008528   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:10.008545   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:10.019265   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:10.019283   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:10.097788   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:10.089795   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.090510   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092137   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092603   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.094152   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:10.089795   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.090510   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092137   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092603   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.094152   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:10.097798   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:10.097814   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:10.175343   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:10.175361   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:12.705105   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:12.714930   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:12.714992   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:12.738794   46141 cri.go:89] found id: ""
	I1202 19:26:12.738808   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.738814   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:12.738819   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:12.738893   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:12.763061   46141 cri.go:89] found id: ""
	I1202 19:26:12.763074   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.763088   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:12.763094   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:12.763147   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:12.789884   46141 cri.go:89] found id: ""
	I1202 19:26:12.789897   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.789904   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:12.789909   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:12.789967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:12.815897   46141 cri.go:89] found id: ""
	I1202 19:26:12.815911   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.815918   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:12.815923   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:12.815980   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:12.842434   46141 cri.go:89] found id: ""
	I1202 19:26:12.842448   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.842455   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:12.842461   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:12.842521   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:12.867046   46141 cri.go:89] found id: ""
	I1202 19:26:12.867059   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.867066   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:12.867071   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:12.867136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:12.891464   46141 cri.go:89] found id: ""
	I1202 19:26:12.891478   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.891484   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:12.891492   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:12.891503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:12.902121   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:12.902136   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:12.963892   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:12.955981   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.956766   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958385   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958708   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.960199   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:12.955981   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.956766   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958385   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958708   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.960199   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:12.963902   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:12.963913   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:13.043923   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:13.043944   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:13.073893   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:13.073909   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:15.646846   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:15.656672   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:15.656727   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:15.685223   46141 cri.go:89] found id: ""
	I1202 19:26:15.685236   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.685243   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:15.685249   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:15.685309   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:15.710499   46141 cri.go:89] found id: ""
	I1202 19:26:15.710513   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.710520   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:15.710526   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:15.710582   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:15.734748   46141 cri.go:89] found id: ""
	I1202 19:26:15.734762   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.734775   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:15.734780   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:15.734833   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:15.759539   46141 cri.go:89] found id: ""
	I1202 19:26:15.759551   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.759558   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:15.759564   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:15.759617   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:15.788358   46141 cri.go:89] found id: ""
	I1202 19:26:15.788371   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.788378   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:15.788383   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:15.788443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:15.813365   46141 cri.go:89] found id: ""
	I1202 19:26:15.813379   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.813386   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:15.813391   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:15.813445   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:15.842535   46141 cri.go:89] found id: ""
	I1202 19:26:15.842550   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.842558   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:15.842565   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:15.842576   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:15.853891   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:15.853906   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:15.921614   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:15.914053   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.914564   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916003   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916376   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.917614   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:15.921632   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:15.921643   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:15.997309   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:15.997326   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:16.029023   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:16.029039   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:18.596080   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:18.605748   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:18.605804   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:18.630525   46141 cri.go:89] found id: ""
	I1202 19:26:18.630539   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.630546   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:18.630551   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:18.630608   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:18.655399   46141 cri.go:89] found id: ""
	I1202 19:26:18.655412   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.655419   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:18.655425   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:18.655479   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:18.681041   46141 cri.go:89] found id: ""
	I1202 19:26:18.681054   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.681061   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:18.681067   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:18.681123   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:18.710155   46141 cri.go:89] found id: ""
	I1202 19:26:18.710168   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.710181   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:18.710187   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:18.710241   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:18.735242   46141 cri.go:89] found id: ""
	I1202 19:26:18.735256   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.735263   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:18.735268   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:18.735327   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:18.761061   46141 cri.go:89] found id: ""
	I1202 19:26:18.761074   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.761081   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:18.761087   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:18.761149   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:18.788428   46141 cri.go:89] found id: ""
	I1202 19:26:18.788441   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.788448   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:18.788456   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:18.788475   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:18.822471   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:18.822487   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:18.888827   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:18.888844   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:18.899937   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:18.899952   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:18.968344   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:18.961155   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.961520   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963096   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963416   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.964883   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:18.968353   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:18.968365   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:21.544554   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:21.555728   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:21.555784   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:21.584623   46141 cri.go:89] found id: ""
	I1202 19:26:21.584639   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.584646   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:21.584650   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:21.584710   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:21.614647   46141 cri.go:89] found id: ""
	I1202 19:26:21.614660   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.614668   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:21.614672   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:21.614731   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:21.642925   46141 cri.go:89] found id: ""
	I1202 19:26:21.642938   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.642945   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:21.642950   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:21.643003   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:21.668180   46141 cri.go:89] found id: ""
	I1202 19:26:21.668194   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.668202   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:21.668207   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:21.668263   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:21.693295   46141 cri.go:89] found id: ""
	I1202 19:26:21.693308   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.693315   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:21.693321   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:21.693375   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:21.720442   46141 cri.go:89] found id: ""
	I1202 19:26:21.720456   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.720463   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:21.720477   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:21.720550   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:21.745858   46141 cri.go:89] found id: ""
	I1202 19:26:21.745872   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.745879   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:21.745887   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:21.745898   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:21.821815   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:21.821832   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:21.852228   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:21.852243   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:21.925590   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:21.925615   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:21.936630   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:21.936646   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:22.000893   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:21.992158   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.992882   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.994656   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.995179   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.996825   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:24.501139   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:24.511236   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:24.511298   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:24.536070   46141 cri.go:89] found id: ""
	I1202 19:26:24.536084   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.536091   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:24.536096   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:24.536152   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:24.570105   46141 cri.go:89] found id: ""
	I1202 19:26:24.570118   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.570125   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:24.570131   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:24.570195   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:24.602200   46141 cri.go:89] found id: ""
	I1202 19:26:24.602213   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.602220   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:24.602225   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:24.602286   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:24.627716   46141 cri.go:89] found id: ""
	I1202 19:26:24.627730   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.627737   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:24.627743   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:24.627799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:24.653555   46141 cri.go:89] found id: ""
	I1202 19:26:24.653568   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.653575   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:24.653580   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:24.653638   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:24.681296   46141 cri.go:89] found id: ""
	I1202 19:26:24.681310   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.681316   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:24.681322   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:24.681376   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:24.707692   46141 cri.go:89] found id: ""
	I1202 19:26:24.707705   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.707714   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:24.707721   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:24.707731   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:24.782015   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:24.782033   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:24.809710   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:24.809725   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:24.880042   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:24.880061   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:24.890565   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:24.890580   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:24.952416   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:24.944479   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.945161   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.946873   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.947505   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.949103   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:27.452632   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:27.462873   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:27.462933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:27.487753   46141 cri.go:89] found id: ""
	I1202 19:26:27.487766   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.487773   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:27.487778   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:27.487835   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:27.512748   46141 cri.go:89] found id: ""
	I1202 19:26:27.512762   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.512771   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:27.512776   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:27.512833   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:27.542024   46141 cri.go:89] found id: ""
	I1202 19:26:27.542038   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.542045   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:27.542051   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:27.542109   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:27.579960   46141 cri.go:89] found id: ""
	I1202 19:26:27.579973   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.579979   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:27.579989   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:27.580045   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:27.608229   46141 cri.go:89] found id: ""
	I1202 19:26:27.608242   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.608250   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:27.608255   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:27.608318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:27.634613   46141 cri.go:89] found id: ""
	I1202 19:26:27.634626   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.634633   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:27.634639   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:27.634695   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:27.659548   46141 cri.go:89] found id: ""
	I1202 19:26:27.659562   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.659569   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:27.659576   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:27.659587   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:27.727694   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:27.720173   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.720588   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722165   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722762   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.724256   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:27.727704   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:27.727715   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:27.802309   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:27.802327   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:27.831471   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:27.831486   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:27.899227   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:27.899244   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:30.413752   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:30.423684   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:30.423741   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:30.447673   46141 cri.go:89] found id: ""
	I1202 19:26:30.447688   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.447695   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:30.447706   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:30.447762   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:30.473178   46141 cri.go:89] found id: ""
	I1202 19:26:30.473191   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.473198   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:30.473203   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:30.473258   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:30.499098   46141 cri.go:89] found id: ""
	I1202 19:26:30.499112   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.499119   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:30.499124   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:30.499181   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:30.528083   46141 cri.go:89] found id: ""
	I1202 19:26:30.528096   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.528103   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:30.528108   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:30.528165   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:30.562772   46141 cri.go:89] found id: ""
	I1202 19:26:30.562784   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.562791   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:30.562796   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:30.562852   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:30.592139   46141 cri.go:89] found id: ""
	I1202 19:26:30.592152   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.592158   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:30.592163   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:30.592217   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:30.624862   46141 cri.go:89] found id: ""
	I1202 19:26:30.624875   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.624882   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:30.624889   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:30.624901   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:30.636356   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:30.636374   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:30.698721   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:30.690521   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.691312   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.692970   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.693279   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.694784   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:30.698731   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:30.698745   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:30.775221   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:30.775240   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:30.812702   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:30.812718   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:33.383460   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:33.393252   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:33.393318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:33.417381   46141 cri.go:89] found id: ""
	I1202 19:26:33.417394   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.417401   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:33.417407   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:33.417467   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:33.441554   46141 cri.go:89] found id: ""
	I1202 19:26:33.441567   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.441574   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:33.441580   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:33.441633   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:33.466601   46141 cri.go:89] found id: ""
	I1202 19:26:33.466615   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.466621   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:33.466627   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:33.466680   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:33.494897   46141 cri.go:89] found id: ""
	I1202 19:26:33.494910   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.494917   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:33.494922   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:33.494978   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:33.519464   46141 cri.go:89] found id: ""
	I1202 19:26:33.519478   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.519485   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:33.519490   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:33.519549   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:33.556189   46141 cri.go:89] found id: ""
	I1202 19:26:33.556203   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.556210   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:33.556216   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:33.556276   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:33.592420   46141 cri.go:89] found id: ""
	I1202 19:26:33.592436   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.592442   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:33.592459   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:33.592469   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:33.669109   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:33.669128   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:33.703954   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:33.703970   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:33.773221   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:33.773240   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:33.784054   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:33.784068   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:33.846758   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:33.838322   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.839078   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.840804   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.841128   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.842739   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:36.347013   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:36.357404   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:36.357461   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:36.383307   46141 cri.go:89] found id: ""
	I1202 19:26:36.383322   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.383330   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:36.383336   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:36.383391   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:36.409566   46141 cri.go:89] found id: ""
	I1202 19:26:36.409580   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.409588   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:36.409593   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:36.409682   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:36.435280   46141 cri.go:89] found id: ""
	I1202 19:26:36.435294   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.435300   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:36.435306   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:36.435366   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:36.460290   46141 cri.go:89] found id: ""
	I1202 19:26:36.460304   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.460310   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:36.460316   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:36.460368   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:36.484719   46141 cri.go:89] found id: ""
	I1202 19:26:36.484733   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.484740   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:36.484746   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:36.484800   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:36.510020   46141 cri.go:89] found id: ""
	I1202 19:26:36.510034   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.510042   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:36.510048   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:36.510106   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:36.536500   46141 cri.go:89] found id: ""
	I1202 19:26:36.536515   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.536521   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:36.536529   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:36.536539   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:36.616617   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:36.616636   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:36.647169   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:36.647185   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:36.711768   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:36.711787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:36.723184   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:36.723200   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:36.795174   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:36.786043   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.786834   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.788445   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.789117   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.791007   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:39.296074   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:39.306024   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:39.306085   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:39.335889   46141 cri.go:89] found id: ""
	I1202 19:26:39.335915   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.335923   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:39.335928   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:39.335990   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:39.361424   46141 cri.go:89] found id: ""
	I1202 19:26:39.361438   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.361445   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:39.361450   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:39.361505   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:39.387900   46141 cri.go:89] found id: ""
	I1202 19:26:39.387913   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.387920   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:39.387925   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:39.387988   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:39.413856   46141 cri.go:89] found id: ""
	I1202 19:26:39.413871   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.413878   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:39.413884   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:39.413938   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:39.439194   46141 cri.go:89] found id: ""
	I1202 19:26:39.439208   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.439215   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:39.439221   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:39.439278   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:39.465337   46141 cri.go:89] found id: ""
	I1202 19:26:39.465351   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.465359   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:39.465375   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:39.465442   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:39.493124   46141 cri.go:89] found id: ""
	I1202 19:26:39.493137   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.493144   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:39.493152   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:39.493162   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:39.573759   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:39.573780   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:39.608655   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:39.608671   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:39.681483   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:39.681503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:39.692678   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:39.692693   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:39.753005   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:39.745469   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.746166   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747307   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747932   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.749551   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 19:26:42.253264   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:42.266584   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:42.266662   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:42.301576   46141 cri.go:89] found id: ""
	I1202 19:26:42.301591   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.301599   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:42.301605   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:42.301727   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:42.360247   46141 cri.go:89] found id: ""
	I1202 19:26:42.360262   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.360269   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:42.360275   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:42.360344   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:42.390741   46141 cri.go:89] found id: ""
	I1202 19:26:42.390756   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.390766   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:42.390776   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:42.390853   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:42.419121   46141 cri.go:89] found id: ""
	I1202 19:26:42.419137   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.419144   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:42.419152   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:42.419225   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:42.446778   46141 cri.go:89] found id: ""
	I1202 19:26:42.446792   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.446811   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:42.446816   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:42.446884   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:42.472520   46141 cri.go:89] found id: ""
	I1202 19:26:42.472534   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.472541   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:42.472546   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:42.472603   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:42.498770   46141 cri.go:89] found id: ""
	I1202 19:26:42.498783   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.498789   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:42.498797   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:42.498806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:42.579006   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:42.579025   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:42.609942   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:42.609958   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:42.683995   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:42.684022   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:42.695018   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:42.695038   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:42.757205   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:42.749273   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.750084   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751649   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751951   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.753404   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:42.749273   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.750084   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751649   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751951   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.753404   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:45.257372   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:45.279258   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:45.279391   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:45.324360   46141 cri.go:89] found id: ""
	I1202 19:26:45.324374   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.324382   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:45.324389   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:45.324461   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:45.357406   46141 cri.go:89] found id: ""
	I1202 19:26:45.357438   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.357445   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:45.357451   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:45.357520   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:45.390814   46141 cri.go:89] found id: ""
	I1202 19:26:45.390829   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.390836   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:45.390842   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:45.390910   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:45.422248   46141 cri.go:89] found id: ""
	I1202 19:26:45.422262   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.422269   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:45.422274   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:45.422331   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:45.447593   46141 cri.go:89] found id: ""
	I1202 19:26:45.447607   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.447614   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:45.447618   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:45.447669   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:45.473750   46141 cri.go:89] found id: ""
	I1202 19:26:45.473763   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.473770   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:45.473775   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:45.473838   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:45.502345   46141 cri.go:89] found id: ""
	I1202 19:26:45.502358   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.502364   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:45.502373   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:45.502383   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:45.569300   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:45.569319   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:45.581070   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:45.581086   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:45.647631   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:45.640250   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.640775   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642295   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642715   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.644225   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:45.640250   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.640775   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642295   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642715   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.644225   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:45.647641   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:45.647652   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:45.722681   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:45.722699   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:48.249966   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:48.259729   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:48.259788   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:48.284968   46141 cri.go:89] found id: ""
	I1202 19:26:48.284981   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.284995   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:48.285001   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:48.285058   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:48.312117   46141 cri.go:89] found id: ""
	I1202 19:26:48.312131   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.312138   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:48.312143   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:48.312196   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:48.338030   46141 cri.go:89] found id: ""
	I1202 19:26:48.338044   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.338050   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:48.338055   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:48.338108   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:48.363655   46141 cri.go:89] found id: ""
	I1202 19:26:48.363668   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.363675   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:48.363680   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:48.363732   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:48.388544   46141 cri.go:89] found id: ""
	I1202 19:26:48.388565   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.388572   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:48.388577   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:48.388631   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:48.413919   46141 cri.go:89] found id: ""
	I1202 19:26:48.413932   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.413939   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:48.413962   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:48.414018   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:48.438768   46141 cri.go:89] found id: ""
	I1202 19:26:48.438782   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.438789   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:48.438796   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:48.438806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:48.508480   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:48.508498   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:48.519336   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:48.519354   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:48.612485   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:48.603737   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.604095   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605597   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605931   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.607339   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:48.603737   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.604095   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605597   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605931   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.607339   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:48.612495   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:48.612505   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:48.689541   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:48.689559   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:51.220741   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:51.230995   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:51.231052   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:51.257767   46141 cri.go:89] found id: ""
	I1202 19:26:51.257786   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.257794   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:51.257801   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:51.257856   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:51.282338   46141 cri.go:89] found id: ""
	I1202 19:26:51.282351   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.282358   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:51.282363   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:51.282425   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:51.311031   46141 cri.go:89] found id: ""
	I1202 19:26:51.311044   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.311051   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:51.311056   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:51.311111   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:51.339385   46141 cri.go:89] found id: ""
	I1202 19:26:51.339399   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.339405   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:51.339410   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:51.339476   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:51.368365   46141 cri.go:89] found id: ""
	I1202 19:26:51.368379   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.368386   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:51.368391   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:51.368455   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:51.393598   46141 cri.go:89] found id: ""
	I1202 19:26:51.393611   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.393618   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:51.393623   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:51.393696   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:51.423516   46141 cri.go:89] found id: ""
	I1202 19:26:51.423529   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.423536   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:51.423543   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:51.423553   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:51.488010   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:51.480076   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.480890   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482553   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482887   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.484436   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:51.480076   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.480890   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482553   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482887   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.484436   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:51.488020   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:51.488031   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:51.568503   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:51.568521   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:51.604611   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:51.604626   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:51.673166   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:51.673184   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:54.184676   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:54.194875   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:54.194933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:54.219830   46141 cri.go:89] found id: ""
	I1202 19:26:54.219850   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.219857   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:54.219863   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:54.219922   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:54.245201   46141 cri.go:89] found id: ""
	I1202 19:26:54.245214   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.245221   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:54.245228   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:54.245295   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:54.270718   46141 cri.go:89] found id: ""
	I1202 19:26:54.270732   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.270739   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:54.270744   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:54.270799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:54.295488   46141 cri.go:89] found id: ""
	I1202 19:26:54.295501   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.295508   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:54.295513   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:54.295568   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:54.320597   46141 cri.go:89] found id: ""
	I1202 19:26:54.320610   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.320617   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:54.320622   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:54.320675   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:54.348002   46141 cri.go:89] found id: ""
	I1202 19:26:54.348017   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.348024   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:54.348029   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:54.348089   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:54.374189   46141 cri.go:89] found id: ""
	I1202 19:26:54.374203   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.374209   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:54.374217   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:54.374229   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:54.439569   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:54.429536   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.430781   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.433917   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.434361   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.435887   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:54.429536   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.430781   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.433917   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.434361   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.435887   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:54.439581   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:54.439594   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:54.524214   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:54.524233   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:54.564820   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:54.564841   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:54.639908   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:54.639928   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:57.151760   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:57.161952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:57.162007   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:57.186061   46141 cri.go:89] found id: ""
	I1202 19:26:57.186074   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.186081   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:57.186087   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:57.186144   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:57.211829   46141 cri.go:89] found id: ""
	I1202 19:26:57.211843   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.211850   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:57.211856   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:57.211914   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:57.237584   46141 cri.go:89] found id: ""
	I1202 19:26:57.237598   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.237605   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:57.237610   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:57.237697   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:57.266726   46141 cri.go:89] found id: ""
	I1202 19:26:57.266740   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.266746   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:57.266752   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:57.266810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:57.293971   46141 cri.go:89] found id: ""
	I1202 19:26:57.293984   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.293991   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:57.293996   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:57.294050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:57.322602   46141 cri.go:89] found id: ""
	I1202 19:26:57.322615   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.322622   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:57.322628   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:57.322685   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:57.347221   46141 cri.go:89] found id: ""
	I1202 19:26:57.347234   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.347249   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:57.347257   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:57.347267   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:57.358475   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:57.358490   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:57.420357   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:57.412236   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.412944   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.414687   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.415347   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.416972   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:57.412236   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.412944   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.414687   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.415347   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.416972   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:57.420367   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:57.420378   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:57.498037   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:57.498057   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:57.530853   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:57.530870   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:00.105404   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:00.167692   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:00.167773   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:00.310630   46141 cri.go:89] found id: ""
	I1202 19:27:00.310644   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.310652   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:00.310659   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:00.310726   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:00.379652   46141 cri.go:89] found id: ""
	I1202 19:27:00.379665   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.379673   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:00.379678   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:00.379740   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:00.417470   46141 cri.go:89] found id: ""
	I1202 19:27:00.417487   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.417496   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:00.417501   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:00.417571   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:00.459129   46141 cri.go:89] found id: ""
	I1202 19:27:00.459144   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.459151   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:00.459157   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:00.459225   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:00.491958   46141 cri.go:89] found id: ""
	I1202 19:27:00.491973   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.491980   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:00.491986   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:00.492050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:00.522076   46141 cri.go:89] found id: ""
	I1202 19:27:00.522091   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.522098   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:00.522110   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:00.522185   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:00.560640   46141 cri.go:89] found id: ""
	I1202 19:27:00.560654   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.560661   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:00.560668   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:00.560677   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:00.652444   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:00.652464   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:00.684426   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:00.684441   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:00.751419   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:00.751437   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:00.763771   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:00.763786   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:00.826022   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:00.817872   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.818580   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820137   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820449   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.822046   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:00.817872   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.818580   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820137   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820449   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.822046   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:03.326866   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:03.336590   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:03.336644   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:03.361031   46141 cri.go:89] found id: ""
	I1202 19:27:03.361045   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.361051   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:03.361057   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:03.361109   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:03.385187   46141 cri.go:89] found id: ""
	I1202 19:27:03.385201   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.385208   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:03.385214   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:03.385268   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:03.410330   46141 cri.go:89] found id: ""
	I1202 19:27:03.410343   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.410350   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:03.410355   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:03.410412   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:03.435485   46141 cri.go:89] found id: ""
	I1202 19:27:03.435499   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.435505   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:03.435511   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:03.435565   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:03.460310   46141 cri.go:89] found id: ""
	I1202 19:27:03.460323   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.460330   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:03.460335   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:03.460389   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:03.488041   46141 cri.go:89] found id: ""
	I1202 19:27:03.488054   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.488061   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:03.488066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:03.488120   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:03.512748   46141 cri.go:89] found id: ""
	I1202 19:27:03.512761   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.512768   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:03.512776   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:03.512787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:03.523642   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:03.523658   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:03.617573   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:03.607856   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.608861   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610107   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610726   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.612297   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:03.607856   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.608861   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610107   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610726   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.612297   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:03.617591   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:03.617602   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:03.694365   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:03.694383   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:03.726522   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:03.726537   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:06.302579   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:06.312543   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:06.312604   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:06.337638   46141 cri.go:89] found id: ""
	I1202 19:27:06.337693   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.337700   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:06.337706   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:06.337764   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:06.362621   46141 cri.go:89] found id: ""
	I1202 19:27:06.362634   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.362641   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:06.362646   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:06.362698   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:06.387105   46141 cri.go:89] found id: ""
	I1202 19:27:06.387121   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.387127   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:06.387133   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:06.387186   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:06.415681   46141 cri.go:89] found id: ""
	I1202 19:27:06.415694   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.415700   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:06.415706   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:06.415760   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:06.444254   46141 cri.go:89] found id: ""
	I1202 19:27:06.444267   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.444274   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:06.444279   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:06.444337   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:06.468778   46141 cri.go:89] found id: ""
	I1202 19:27:06.468791   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.468799   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:06.468805   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:06.468859   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:06.493545   46141 cri.go:89] found id: ""
	I1202 19:27:06.493558   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.493564   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:06.493572   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:06.493583   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:06.567943   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:06.559330   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.560393   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.561970   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.562264   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.563594   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:06.559330   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.560393   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.561970   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.562264   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.563594   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:06.567953   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:06.567963   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:06.656325   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:06.656344   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:06.685907   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:06.685923   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:06.756875   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:06.756894   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
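	The block above repeats roughly every three seconds: the health check looks for a running apiserver process (pgrep -xnf kube-apiserver.*minikube.*), then asks the CRI runtime via crictl whether any control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) exists, and, finding none, falls back to gathering kubelet, dmesg, CRI-O, and "describe nodes" output. The following Go sketch is only an illustration of that per-component query, not minikube source; the component names and the "crictl ps -a --quiet --name=<component>" invocation are taken from the log lines above, while the function names and error handling are hypothetical.

	// checkcp.go - illustrative sketch only; not minikube code.
	// Mirrors the per-component query visible in the log:
	//   sudo crictl ps -a --quiet --name=<component>
	// and reports which control-plane containers exist.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Component names as they appear in the log above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}

	// containerIDs runs crictl and returns the (possibly empty) list of container IDs.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("E: %s: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// Matches the log's: No container was found matching "<name>"
				fmt.Printf("W: no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("I: %s: %d container(s): %v\n", c, len(ids), ids)
		}
	}

	Running this on the node in the state shown above would print the "no container was found matching" warning for every component, which is exactly the pattern the log keeps repeating.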
	I1202 19:27:09.270257   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:09.280597   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:09.280658   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:09.304838   46141 cri.go:89] found id: ""
	I1202 19:27:09.304856   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.304863   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:09.304872   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:09.304926   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:09.329409   46141 cri.go:89] found id: ""
	I1202 19:27:09.329422   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.329430   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:09.329435   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:09.329491   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:09.353934   46141 cri.go:89] found id: ""
	I1202 19:27:09.353948   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.353954   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:09.353960   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:09.354016   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:09.379084   46141 cri.go:89] found id: ""
	I1202 19:27:09.379098   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.379105   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:09.379111   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:09.379166   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:09.404377   46141 cri.go:89] found id: ""
	I1202 19:27:09.404391   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.404398   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:09.404403   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:09.404459   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:09.429248   46141 cri.go:89] found id: ""
	I1202 19:27:09.429262   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.429269   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:09.429274   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:09.429331   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:09.453340   46141 cri.go:89] found id: ""
	I1202 19:27:09.453354   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.453360   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:09.453367   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:09.453378   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:09.519114   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:09.519131   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:09.530268   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:09.530282   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:09.622354   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:09.612897   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.613708   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615186   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615757   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.617773   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:09.612897   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.613708   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615186   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615757   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.617773   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:09.622364   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:09.622374   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:09.698919   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:09.698936   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:12.231072   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:12.240732   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:12.240796   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:12.267547   46141 cri.go:89] found id: ""
	I1202 19:27:12.267560   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.267566   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:12.267572   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:12.267626   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:12.291129   46141 cri.go:89] found id: ""
	I1202 19:27:12.291143   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.291150   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:12.291155   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:12.291209   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:12.316228   46141 cri.go:89] found id: ""
	I1202 19:27:12.316242   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.316248   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:12.316253   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:12.316305   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:12.340306   46141 cri.go:89] found id: ""
	I1202 19:27:12.340319   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.340326   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:12.340331   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:12.340386   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:12.365210   46141 cri.go:89] found id: ""
	I1202 19:27:12.365224   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.365230   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:12.365239   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:12.365299   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:12.393299   46141 cri.go:89] found id: ""
	I1202 19:27:12.393312   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.393319   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:12.393327   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:12.393387   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:12.418063   46141 cri.go:89] found id: ""
	I1202 19:27:12.418089   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.418096   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:12.418104   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:12.418114   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:12.450419   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:12.450434   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:12.520281   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:12.520300   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:12.531244   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:12.531260   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:12.614672   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:12.606974   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.607492   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609182   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609779   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.611155   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:12.606974   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.607492   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609182   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609779   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.611155   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:12.614681   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:12.614691   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:15.191935   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:15.202075   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:15.202136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:15.227991   46141 cri.go:89] found id: ""
	I1202 19:27:15.228004   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.228011   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:15.228016   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:15.228073   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:15.253837   46141 cri.go:89] found id: ""
	I1202 19:27:15.253850   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.253856   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:15.253861   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:15.253916   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:15.279658   46141 cri.go:89] found id: ""
	I1202 19:27:15.279671   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.279677   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:15.279682   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:15.279735   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:15.303415   46141 cri.go:89] found id: ""
	I1202 19:27:15.303429   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.303435   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:15.303440   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:15.303496   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:15.327738   46141 cri.go:89] found id: ""
	I1202 19:27:15.327752   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.327759   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:15.327764   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:15.327818   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:15.353097   46141 cri.go:89] found id: ""
	I1202 19:27:15.353110   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.353117   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:15.353122   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:15.353175   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:15.377713   46141 cri.go:89] found id: ""
	I1202 19:27:15.377726   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.377734   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:15.377741   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:15.377751   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:15.443006   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:15.443024   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:15.453500   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:15.453519   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:15.518415   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:15.510822   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.511600   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513071   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513530   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.514994   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:15.510822   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.511600   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513071   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513530   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.514994   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:15.518425   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:15.518438   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:15.596810   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:15.596828   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:18.130179   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:18.140204   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:18.140265   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:18.167800   46141 cri.go:89] found id: ""
	I1202 19:27:18.167814   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.167821   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:18.167826   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:18.167882   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:18.191990   46141 cri.go:89] found id: ""
	I1202 19:27:18.192003   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.192010   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:18.192015   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:18.192072   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:18.216815   46141 cri.go:89] found id: ""
	I1202 19:27:18.216828   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.216835   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:18.216840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:18.216894   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:18.240868   46141 cri.go:89] found id: ""
	I1202 19:27:18.240881   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.240888   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:18.240894   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:18.240950   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:18.265457   46141 cri.go:89] found id: ""
	I1202 19:27:18.265470   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.265476   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:18.265482   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:18.265533   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:18.289248   46141 cri.go:89] found id: ""
	I1202 19:27:18.289262   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.289269   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:18.289275   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:18.289339   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:18.312672   46141 cri.go:89] found id: ""
	I1202 19:27:18.312685   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.312692   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:18.312700   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:18.312710   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:18.380764   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:18.380781   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:18.391485   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:18.391501   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:18.453699   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:18.445756   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.446556   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448225   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448811   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.450496   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:18.445756   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.446556   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448225   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448811   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.450496   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:18.453709   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:18.453720   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:18.530116   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:18.530134   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:21.069567   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:21.079484   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:21.079550   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:21.103488   46141 cri.go:89] found id: ""
	I1202 19:27:21.103503   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.103511   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:21.103517   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:21.103572   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:21.130794   46141 cri.go:89] found id: ""
	I1202 19:27:21.130807   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.130814   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:21.130819   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:21.130876   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:21.154925   46141 cri.go:89] found id: ""
	I1202 19:27:21.154940   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.154946   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:21.154952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:21.155008   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:21.183874   46141 cri.go:89] found id: ""
	I1202 19:27:21.183887   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.183895   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:21.183900   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:21.183956   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:21.208723   46141 cri.go:89] found id: ""
	I1202 19:27:21.208736   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.208744   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:21.208750   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:21.208805   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:21.233965   46141 cri.go:89] found id: ""
	I1202 19:27:21.233978   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.233985   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:21.233990   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:21.234046   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:21.257686   46141 cri.go:89] found id: ""
	I1202 19:27:21.257699   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.257706   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:21.257714   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:21.257724   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:21.318236   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:21.310432   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.311107   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.312717   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.313282   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.314957   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:21.310432   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.311107   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.312717   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.313282   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.314957   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:21.318250   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:21.318261   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:21.395292   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:21.395310   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:21.422658   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:21.422674   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:21.489157   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:21.489174   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:24.001769   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:24.011691   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:24.011752   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:24.042533   46141 cri.go:89] found id: ""
	I1202 19:27:24.042554   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.042561   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:24.042566   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:24.042624   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:24.070666   46141 cri.go:89] found id: ""
	I1202 19:27:24.070679   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.070686   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:24.070691   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:24.070753   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:24.095535   46141 cri.go:89] found id: ""
	I1202 19:27:24.095549   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.095556   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:24.095561   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:24.095619   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:24.123758   46141 cri.go:89] found id: ""
	I1202 19:27:24.123772   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.123779   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:24.123784   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:24.123838   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:24.149095   46141 cri.go:89] found id: ""
	I1202 19:27:24.149108   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.149114   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:24.149120   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:24.149175   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:24.174002   46141 cri.go:89] found id: ""
	I1202 19:27:24.174015   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.174022   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:24.174027   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:24.174125   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:24.200105   46141 cri.go:89] found id: ""
	I1202 19:27:24.200119   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.200126   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:24.200133   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:24.200144   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:24.266202   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:24.266219   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:24.277238   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:24.277253   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:24.343395   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:24.336040   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.336700   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338307   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338687   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.340133   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:24.336040   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.336700   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338307   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338687   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.340133   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:24.343404   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:24.343414   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:24.424919   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:24.424936   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:26.953925   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:26.963713   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:26.963769   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:26.988142   46141 cri.go:89] found id: ""
	I1202 19:27:26.988156   46141 logs.go:282] 0 containers: []
	W1202 19:27:26.988163   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:26.988168   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:26.988223   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:27.013673   46141 cri.go:89] found id: ""
	I1202 19:27:27.013687   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.013694   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:27.013699   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:27.013754   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:27.039371   46141 cri.go:89] found id: ""
	I1202 19:27:27.039384   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.039391   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:27.039396   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:27.039452   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:27.062786   46141 cri.go:89] found id: ""
	I1202 19:27:27.062800   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.062807   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:27.062812   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:27.062868   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:27.087058   46141 cri.go:89] found id: ""
	I1202 19:27:27.087072   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.087078   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:27.087083   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:27.087139   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:27.111397   46141 cri.go:89] found id: ""
	I1202 19:27:27.111410   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.111417   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:27.111422   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:27.111474   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:27.134753   46141 cri.go:89] found id: ""
	I1202 19:27:27.134774   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.134781   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:27.134788   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:27.134798   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:27.200051   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:27.200069   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:27.210589   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:27.210603   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:27.274673   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:27.267276   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.268043   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269599   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269964   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.271462   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:27.267276   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.268043   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269599   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269964   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.271462   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:27.274684   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:27.274695   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:27.350589   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:27.350607   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:29.879009   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:29.888757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:29.888814   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:29.914106   46141 cri.go:89] found id: ""
	I1202 19:27:29.914119   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.914126   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:29.914131   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:29.914198   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:29.945870   46141 cri.go:89] found id: ""
	I1202 19:27:29.945883   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.945890   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:29.945895   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:29.945951   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:29.972147   46141 cri.go:89] found id: ""
	I1202 19:27:29.972161   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.972168   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:29.972173   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:29.972237   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:29.999569   46141 cri.go:89] found id: ""
	I1202 19:27:29.999583   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.999590   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:29.999595   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:29.999654   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:30.048258   46141 cri.go:89] found id: ""
	I1202 19:27:30.048273   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.048281   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:30.048286   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:30.048361   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:30.083224   46141 cri.go:89] found id: ""
	I1202 19:27:30.083238   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.083245   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:30.083251   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:30.083308   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:30.113945   46141 cri.go:89] found id: ""
	I1202 19:27:30.113959   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.113966   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:30.113975   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:30.113986   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:30.192106   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:30.192125   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:30.221887   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:30.221904   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:30.290188   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:30.290204   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:30.301167   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:30.301182   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:30.362881   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:30.354371   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.354912   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.356661   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.357283   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.358977   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:30.354371   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.354912   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.356661   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.357283   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.358977   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:32.863109   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:32.872876   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:32.872937   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:32.897586   46141 cri.go:89] found id: ""
	I1202 19:27:32.897603   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.897610   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:32.897615   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:32.897706   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:32.924245   46141 cri.go:89] found id: ""
	I1202 19:27:32.924258   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.924265   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:32.924270   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:32.924332   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:32.951911   46141 cri.go:89] found id: ""
	I1202 19:27:32.951925   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.951932   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:32.951938   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:32.951992   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:32.975852   46141 cri.go:89] found id: ""
	I1202 19:27:32.975865   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.975872   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:32.975878   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:32.975933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:33.000511   46141 cri.go:89] found id: ""
	I1202 19:27:33.000525   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.000532   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:33.000537   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:33.000591   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:33.030910   46141 cri.go:89] found id: ""
	I1202 19:27:33.030924   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.030931   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:33.030936   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:33.030993   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:33.055909   46141 cri.go:89] found id: ""
	I1202 19:27:33.055922   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.055929   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:33.055937   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:33.055947   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:33.121449   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:33.121471   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:33.134922   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:33.134955   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:33.198500   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:33.189992   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.190784   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.192519   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.193191   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.194708   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:33.189992   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.190784   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.192519   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.193191   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.194708   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:33.198512   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:33.198524   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:33.275340   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:33.275358   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:35.803184   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:35.814556   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:35.814622   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:35.843911   46141 cri.go:89] found id: ""
	I1202 19:27:35.843927   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.843934   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:35.843939   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:35.844010   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:35.872792   46141 cri.go:89] found id: ""
	I1202 19:27:35.872807   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.872814   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:35.872819   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:35.872885   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:35.899563   46141 cri.go:89] found id: ""
	I1202 19:27:35.899576   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.899583   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:35.899588   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:35.899642   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:35.929110   46141 cri.go:89] found id: ""
	I1202 19:27:35.929133   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.929141   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:35.929147   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:35.929214   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:35.953603   46141 cri.go:89] found id: ""
	I1202 19:27:35.953617   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.953624   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:35.953629   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:35.953706   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:35.978487   46141 cri.go:89] found id: ""
	I1202 19:27:35.978501   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.978508   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:35.978513   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:35.978571   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:36.002610   46141 cri.go:89] found id: ""
	I1202 19:27:36.002623   46141 logs.go:282] 0 containers: []
	W1202 19:27:36.002629   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:36.002636   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:36.002647   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:36.078660   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:36.078679   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:36.108572   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:36.108589   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:36.174842   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:36.174858   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:36.185725   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:36.185740   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:36.248843   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:36.241261   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.241837   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243308   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243762   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.245492   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:36.241261   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.241837   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243308   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243762   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.245492   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:38.749933   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:38.759902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:38.759959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:38.784371   46141 cri.go:89] found id: ""
	I1202 19:27:38.784384   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.784390   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:38.784396   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:38.784449   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:38.813903   46141 cri.go:89] found id: ""
	I1202 19:27:38.813918   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.813925   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:38.813930   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:38.813986   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:38.847704   46141 cri.go:89] found id: ""
	I1202 19:27:38.847718   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.847724   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:38.847730   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:38.847786   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:38.874126   46141 cri.go:89] found id: ""
	I1202 19:27:38.874139   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.874146   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:38.874151   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:38.874204   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:38.899808   46141 cri.go:89] found id: ""
	I1202 19:27:38.899822   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.899829   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:38.899835   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:38.899890   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:38.924777   46141 cri.go:89] found id: ""
	I1202 19:27:38.924791   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.924798   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:38.924804   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:38.924898   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:38.949761   46141 cri.go:89] found id: ""
	I1202 19:27:38.949774   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.949781   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:38.949788   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:38.949802   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:39.008770   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:39.001782   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.002283   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003482   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003943   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.005427   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:39.001782   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.002283   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003482   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003943   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.005427   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:39.008780   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:39.008794   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:39.090107   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:39.090125   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:39.122398   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:39.122414   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:39.187817   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:39.187833   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:41.698611   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:41.708767   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:41.708837   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:41.733990   46141 cri.go:89] found id: ""
	I1202 19:27:41.734004   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.734011   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:41.734017   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:41.734080   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:41.759279   46141 cri.go:89] found id: ""
	I1202 19:27:41.759293   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.759299   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:41.759305   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:41.759359   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:41.793259   46141 cri.go:89] found id: ""
	I1202 19:27:41.793272   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.793278   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:41.793284   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:41.793339   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:41.821458   46141 cri.go:89] found id: ""
	I1202 19:27:41.821471   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.821484   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:41.821489   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:41.821545   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:41.849637   46141 cri.go:89] found id: ""
	I1202 19:27:41.849670   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.849678   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:41.849683   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:41.849743   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:41.881100   46141 cri.go:89] found id: ""
	I1202 19:27:41.881113   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.881121   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:41.881127   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:41.881189   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:41.906054   46141 cri.go:89] found id: ""
	I1202 19:27:41.906067   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.906074   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:41.906082   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:41.906092   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:41.916746   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:41.916761   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:41.979747   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:41.971464   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.972400   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.973426   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.974900   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.975372   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:41.971464   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.972400   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.973426   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.974900   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.975372   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:41.979757   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:41.979767   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:42.054766   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:42.054787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:42.086163   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:42.086187   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:44.697773   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:44.707597   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:44.707659   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:44.733158   46141 cri.go:89] found id: ""
	I1202 19:27:44.733184   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.733191   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:44.733196   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:44.733261   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:44.757757   46141 cri.go:89] found id: ""
	I1202 19:27:44.757771   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.757778   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:44.757784   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:44.757843   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:44.783874   46141 cri.go:89] found id: ""
	I1202 19:27:44.783888   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.783897   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:44.783902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:44.783959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:44.816248   46141 cri.go:89] found id: ""
	I1202 19:27:44.816261   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.816268   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:44.816273   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:44.816327   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:44.847419   46141 cri.go:89] found id: ""
	I1202 19:27:44.847433   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.847440   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:44.847445   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:44.847504   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:44.873837   46141 cri.go:89] found id: ""
	I1202 19:27:44.873851   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.873858   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:44.873863   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:44.873918   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:44.897843   46141 cri.go:89] found id: ""
	I1202 19:27:44.897856   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.897863   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:44.897871   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:44.897881   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:44.966499   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:44.966516   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:44.978644   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:44.978659   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:45.054728   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:45.041553   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.042293   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046104   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046962   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.049337   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:45.041553   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.042293   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046104   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046962   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.049337   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:45.054738   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:45.054765   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:45.162639   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:45.162660   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:47.718000   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:47.727890   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:47.727953   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:47.752168   46141 cri.go:89] found id: ""
	I1202 19:27:47.752181   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.752188   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:47.752193   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:47.752253   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:47.776058   46141 cri.go:89] found id: ""
	I1202 19:27:47.776071   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.776078   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:47.776086   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:47.776143   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:47.809050   46141 cri.go:89] found id: ""
	I1202 19:27:47.809065   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.809072   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:47.809078   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:47.809142   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:47.851196   46141 cri.go:89] found id: ""
	I1202 19:27:47.851209   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.851222   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:47.851227   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:47.851285   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:47.877019   46141 cri.go:89] found id: ""
	I1202 19:27:47.877033   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.877039   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:47.877045   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:47.877104   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:47.906595   46141 cri.go:89] found id: ""
	I1202 19:27:47.906609   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.906616   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:47.906621   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:47.906684   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:47.931137   46141 cri.go:89] found id: ""
	I1202 19:27:47.931150   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.931157   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:47.931165   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:47.931175   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:47.960778   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:47.960794   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:48.026698   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:48.026716   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:48.039024   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:48.039040   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:48.104995   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:48.097226   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.097787   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.099441   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.100078   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.101639   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:48.097226   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.097787   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.099441   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.100078   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.101639   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:48.105014   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:48.105026   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:50.681972   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:50.691952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:50.692008   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:50.716419   46141 cri.go:89] found id: ""
	I1202 19:27:50.716432   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.716438   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:50.716443   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:50.716497   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:50.743698   46141 cri.go:89] found id: ""
	I1202 19:27:50.743712   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.743718   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:50.743723   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:50.743778   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:50.768264   46141 cri.go:89] found id: ""
	I1202 19:27:50.768277   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.768283   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:50.768297   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:50.768354   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:50.794403   46141 cri.go:89] found id: ""
	I1202 19:27:50.794428   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.794436   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:50.794441   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:50.794504   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:50.820731   46141 cri.go:89] found id: ""
	I1202 19:27:50.820745   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.820752   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:50.820757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:50.820812   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:50.852081   46141 cri.go:89] found id: ""
	I1202 19:27:50.852094   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.852101   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:50.852106   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:50.852172   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:50.879611   46141 cri.go:89] found id: ""
	I1202 19:27:50.879625   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.879631   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:50.879644   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:50.879654   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:50.906936   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:50.906951   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:50.975206   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:50.975223   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:50.985872   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:50.985895   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:51.052846   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:51.045333   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.046129   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.047754   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.048056   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.049501   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:51.045333   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.046129   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.047754   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.048056   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.049501   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:51.052855   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:51.052866   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:53.628857   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:53.638710   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:53.638773   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:53.662581   46141 cri.go:89] found id: ""
	I1202 19:27:53.662595   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.662602   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:53.662607   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:53.662660   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:53.687222   46141 cri.go:89] found id: ""
	I1202 19:27:53.687237   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.687244   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:53.687249   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:53.687306   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:53.711983   46141 cri.go:89] found id: ""
	I1202 19:27:53.711996   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.712003   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:53.712009   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:53.712065   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:53.737377   46141 cri.go:89] found id: ""
	I1202 19:27:53.737391   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.737398   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:53.737403   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:53.737456   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:53.765301   46141 cri.go:89] found id: ""
	I1202 19:27:53.765315   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.765321   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:53.765327   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:53.765383   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:53.793518   46141 cri.go:89] found id: ""
	I1202 19:27:53.793531   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.793537   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:53.793542   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:53.793597   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:53.822849   46141 cri.go:89] found id: ""
	I1202 19:27:53.822863   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.822870   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:53.822877   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:53.822887   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:53.854992   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:53.855010   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:53.921075   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:53.921094   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:53.931936   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:53.931951   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:53.995407   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:53.987658   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.988344   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990099   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990623   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.992109   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:53.987658   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.988344   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990099   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990623   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.992109   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:53.995422   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:53.995432   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:56.577211   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:56.588419   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:56.588476   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:56.617070   46141 cri.go:89] found id: ""
	I1202 19:27:56.617083   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.617090   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:56.617096   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:56.617149   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:56.644965   46141 cri.go:89] found id: ""
	I1202 19:27:56.644979   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.644986   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:56.644990   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:56.645050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:56.673885   46141 cri.go:89] found id: ""
	I1202 19:27:56.673899   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.673906   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:56.673911   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:56.673965   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:56.698577   46141 cri.go:89] found id: ""
	I1202 19:27:56.698590   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.698597   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:56.698603   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:56.698659   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:56.727980   46141 cri.go:89] found id: ""
	I1202 19:27:56.727995   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.728001   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:56.728007   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:56.728061   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:56.752295   46141 cri.go:89] found id: ""
	I1202 19:27:56.752309   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.752316   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:56.752321   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:56.752378   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:56.777216   46141 cri.go:89] found id: ""
	I1202 19:27:56.777228   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.777236   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:56.777243   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:56.777254   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:56.788028   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:56.788043   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:56.868442   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:56.860615   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.861389   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.862944   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.863447   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.865085   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:56.860615   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.861389   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.862944   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.863447   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.865085   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:56.868452   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:56.868462   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:56.944462   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:56.944480   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:56.979950   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:56.979964   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:59.548516   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:59.558289   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:59.558346   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:59.581971   46141 cri.go:89] found id: ""
	I1202 19:27:59.581984   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.581991   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:59.581997   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:59.582054   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:59.606472   46141 cri.go:89] found id: ""
	I1202 19:27:59.606485   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.606492   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:59.606497   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:59.606551   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:59.631964   46141 cri.go:89] found id: ""
	I1202 19:27:59.631977   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.631984   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:59.631989   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:59.632042   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:59.657151   46141 cri.go:89] found id: ""
	I1202 19:27:59.657164   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.657171   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:59.657177   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:59.657232   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:59.683812   46141 cri.go:89] found id: ""
	I1202 19:27:59.683826   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.683834   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:59.683840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:59.683901   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:59.712800   46141 cri.go:89] found id: ""
	I1202 19:27:59.712814   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.712821   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:59.712826   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:59.712900   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:59.745829   46141 cri.go:89] found id: ""
	I1202 19:27:59.745842   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.745849   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:59.745856   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:59.745868   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:59.817077   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:59.807605   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.808444   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.809916   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.810599   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.812428   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:59.807605   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.808444   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.809916   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.810599   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.812428   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:59.817087   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:59.817097   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:59.907455   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:59.907474   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:59.935466   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:59.935480   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:00.005487   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:00.005511   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:02.519937   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:02.529900   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:02.529967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:02.555080   46141 cri.go:89] found id: ""
	I1202 19:28:02.555093   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.555099   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:02.555105   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:02.555160   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:02.579988   46141 cri.go:89] found id: ""
	I1202 19:28:02.580002   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.580009   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:02.580015   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:02.580069   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:02.604847   46141 cri.go:89] found id: ""
	I1202 19:28:02.604861   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.604868   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:02.604874   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:02.604937   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:02.629805   46141 cri.go:89] found id: ""
	I1202 19:28:02.629818   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.629825   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:02.629832   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:02.629888   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:02.654310   46141 cri.go:89] found id: ""
	I1202 19:28:02.654324   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.654330   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:02.654336   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:02.654393   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:02.683226   46141 cri.go:89] found id: ""
	I1202 19:28:02.683239   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.683246   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:02.683252   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:02.683306   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:02.707703   46141 cri.go:89] found id: ""
	I1202 19:28:02.707717   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.707724   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:02.707732   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:02.707741   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:02.783085   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:02.783103   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:02.829513   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:02.829528   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:02.903215   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:02.903231   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:02.914284   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:02.914302   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:02.974963   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:02.967707   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.968085   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.969699   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.970246   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.971730   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:02.967707   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.968085   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.969699   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.970246   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.971730   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:05.475826   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:05.485953   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:05.486009   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:05.512427   46141 cri.go:89] found id: ""
	I1202 19:28:05.512440   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.512447   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:05.512453   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:05.512509   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:05.536678   46141 cri.go:89] found id: ""
	I1202 19:28:05.536691   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.536698   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:05.536703   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:05.536757   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:05.561732   46141 cri.go:89] found id: ""
	I1202 19:28:05.561745   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.561752   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:05.561757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:05.561810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:05.585989   46141 cri.go:89] found id: ""
	I1202 19:28:05.586003   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.586010   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:05.586015   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:05.586073   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:05.611860   46141 cri.go:89] found id: ""
	I1202 19:28:05.611891   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.611899   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:05.611904   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:05.611969   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:05.637502   46141 cri.go:89] found id: ""
	I1202 19:28:05.637516   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.637523   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:05.637528   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:05.637583   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:05.662486   46141 cri.go:89] found id: ""
	I1202 19:28:05.662499   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.662506   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:05.662514   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:05.662525   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:05.727597   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:05.727615   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:05.738294   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:05.738309   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:05.810066   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:05.802014   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.802691   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804141   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804634   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.806227   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:05.802014   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.802691   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804141   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804634   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.806227   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:05.810076   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:05.810088   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:05.892482   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:05.892506   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:08.423125   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:08.433033   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:08.433090   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:08.458175   46141 cri.go:89] found id: ""
	I1202 19:28:08.458189   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.458195   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:08.458201   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:08.458257   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:08.483893   46141 cri.go:89] found id: ""
	I1202 19:28:08.483906   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.483913   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:08.483918   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:08.483974   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:08.507923   46141 cri.go:89] found id: ""
	I1202 19:28:08.507937   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.507953   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:08.507964   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:08.508081   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:08.537015   46141 cri.go:89] found id: ""
	I1202 19:28:08.537030   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.537041   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:08.537046   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:08.537102   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:08.562386   46141 cri.go:89] found id: ""
	I1202 19:28:08.562399   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.562405   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:08.562410   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:08.562464   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:08.589367   46141 cri.go:89] found id: ""
	I1202 19:28:08.589380   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.589387   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:08.589392   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:08.589446   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:08.614763   46141 cri.go:89] found id: ""
	I1202 19:28:08.614776   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.614782   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:08.614790   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:08.614806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:08.680003   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:08.680020   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:08.691092   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:08.691108   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:08.758435   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:08.751102   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.751837   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.753302   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.754013   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.755221   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:08.751102   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.751837   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.753302   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.754013   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.755221   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:08.758444   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:08.758455   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:08.838206   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:08.838225   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:11.377402   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:11.387381   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:11.387443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:11.416000   46141 cri.go:89] found id: ""
	I1202 19:28:11.416013   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.416020   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:11.416025   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:11.416086   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:11.440887   46141 cri.go:89] found id: ""
	I1202 19:28:11.440900   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.440907   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:11.440913   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:11.440980   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:11.469507   46141 cri.go:89] found id: ""
	I1202 19:28:11.469520   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.469527   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:11.469533   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:11.469589   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:11.494304   46141 cri.go:89] found id: ""
	I1202 19:28:11.494324   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.494331   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:11.494337   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:11.494395   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:11.519823   46141 cri.go:89] found id: ""
	I1202 19:28:11.519836   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.519843   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:11.519848   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:11.519905   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:11.544959   46141 cri.go:89] found id: ""
	I1202 19:28:11.544972   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.544980   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:11.544985   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:11.545043   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:11.569409   46141 cri.go:89] found id: ""
	I1202 19:28:11.569422   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.569429   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:11.569437   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:11.569449   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:11.605867   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:11.605883   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:11.672817   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:11.672835   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:11.683920   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:11.683937   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:11.748483   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:11.740906   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.741544   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743071   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743398   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.744924   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:11.740906   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.741544   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743071   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743398   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.744924   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:11.748494   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:11.748505   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:14.328100   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:14.338319   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:14.338385   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:14.368273   46141 cri.go:89] found id: ""
	I1202 19:28:14.368287   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.368293   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:14.368299   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:14.368353   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:14.393695   46141 cri.go:89] found id: ""
	I1202 19:28:14.393708   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.393715   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:14.393720   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:14.393778   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:14.419532   46141 cri.go:89] found id: ""
	I1202 19:28:14.419546   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.419552   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:14.419558   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:14.419611   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:14.444792   46141 cri.go:89] found id: ""
	I1202 19:28:14.444806   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.444812   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:14.444818   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:14.444874   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:14.473002   46141 cri.go:89] found id: ""
	I1202 19:28:14.473015   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.473022   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:14.473027   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:14.473082   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:14.500557   46141 cri.go:89] found id: ""
	I1202 19:28:14.500570   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.500577   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:14.500583   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:14.500639   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:14.531570   46141 cri.go:89] found id: ""
	I1202 19:28:14.531583   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.531591   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:14.531598   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:14.531608   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:14.563367   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:14.563385   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:14.629330   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:14.629348   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:14.640467   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:14.640482   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:14.703192   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:14.695736   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.696103   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697646   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697966   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.699508   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:14.695736   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.696103   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697646   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697966   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.699508   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:14.703201   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:14.703212   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:17.280934   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:17.290754   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:17.290816   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:17.315632   46141 cri.go:89] found id: ""
	I1202 19:28:17.315645   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.315652   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:17.315657   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:17.315715   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:17.339240   46141 cri.go:89] found id: ""
	I1202 19:28:17.339256   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.339281   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:17.339304   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:17.339361   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:17.362387   46141 cri.go:89] found id: ""
	I1202 19:28:17.362401   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.362408   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:17.362415   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:17.362471   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:17.388183   46141 cri.go:89] found id: ""
	I1202 19:28:17.388197   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.388204   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:17.388209   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:17.388264   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:17.417561   46141 cri.go:89] found id: ""
	I1202 19:28:17.417575   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.417582   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:17.417588   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:17.417643   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:17.441561   46141 cri.go:89] found id: ""
	I1202 19:28:17.441574   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.441581   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:17.441596   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:17.441678   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:17.467464   46141 cri.go:89] found id: ""
	I1202 19:28:17.467477   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.467483   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:17.467491   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:17.467501   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:17.543368   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:17.543386   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:17.574792   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:17.574807   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:17.641345   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:17.641363   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:17.651872   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:17.651892   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:17.719233   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:17.711006   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.711827   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713430   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713940   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.715763   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:17.711006   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.711827   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713430   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713940   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.715763   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:20.219437   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:20.229376   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:20.229437   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:20.254960   46141 cri.go:89] found id: ""
	I1202 19:28:20.254973   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.254980   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:20.254985   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:20.255048   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:20.280663   46141 cri.go:89] found id: ""
	I1202 19:28:20.280676   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.280683   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:20.280688   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:20.280744   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:20.309275   46141 cri.go:89] found id: ""
	I1202 19:28:20.309288   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.309295   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:20.309300   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:20.309354   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:20.334255   46141 cri.go:89] found id: ""
	I1202 19:28:20.334268   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.334275   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:20.334281   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:20.334334   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:20.359290   46141 cri.go:89] found id: ""
	I1202 19:28:20.359303   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.359310   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:20.359330   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:20.359385   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:20.387906   46141 cri.go:89] found id: ""
	I1202 19:28:20.387919   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.387931   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:20.387937   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:20.387995   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:20.412377   46141 cri.go:89] found id: ""
	I1202 19:28:20.412391   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.412398   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:20.412406   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:20.412421   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:20.478975   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:20.478994   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:20.491271   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:20.491286   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:20.559186   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:20.551818   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.552365   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554024   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554320   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.555835   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:20.551818   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.552365   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554024   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554320   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.555835   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:20.559197   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:20.559208   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:20.635117   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:20.635135   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:23.163845   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:23.174025   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:23.174084   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:23.198952   46141 cri.go:89] found id: ""
	I1202 19:28:23.198965   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.198972   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:23.198977   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:23.199040   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:23.227109   46141 cri.go:89] found id: ""
	I1202 19:28:23.227122   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.227128   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:23.227133   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:23.227194   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:23.252085   46141 cri.go:89] found id: ""
	I1202 19:28:23.252099   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.252106   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:23.252111   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:23.252178   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:23.282041   46141 cri.go:89] found id: ""
	I1202 19:28:23.282054   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.282061   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:23.282066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:23.282120   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:23.306149   46141 cri.go:89] found id: ""
	I1202 19:28:23.306163   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.306170   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:23.306176   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:23.306231   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:23.330130   46141 cri.go:89] found id: ""
	I1202 19:28:23.330143   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.330158   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:23.330165   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:23.330232   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:23.354289   46141 cri.go:89] found id: ""
	I1202 19:28:23.354303   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.354309   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:23.354317   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:23.354327   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:23.421463   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:23.421481   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:23.432425   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:23.432442   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:23.499162   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:23.491387   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.491769   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493283   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493585   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.495084   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:23.491387   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.491769   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493283   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493585   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.495084   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:23.499185   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:23.499198   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:23.574769   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:23.574787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:26.102251   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:26.112999   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:26.113059   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:26.139511   46141 cri.go:89] found id: ""
	I1202 19:28:26.139527   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.139534   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:26.139539   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:26.139595   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:26.163810   46141 cri.go:89] found id: ""
	I1202 19:28:26.163823   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.163830   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:26.163845   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:26.163903   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:26.195678   46141 cri.go:89] found id: ""
	I1202 19:28:26.195691   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.195716   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:26.195721   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:26.195784   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:26.221498   46141 cri.go:89] found id: ""
	I1202 19:28:26.221512   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.221519   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:26.221524   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:26.221591   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:26.246377   46141 cri.go:89] found id: ""
	I1202 19:28:26.246391   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.246397   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:26.246402   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:26.246464   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:26.270652   46141 cri.go:89] found id: ""
	I1202 19:28:26.270665   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.270673   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:26.270678   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:26.270763   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:26.296694   46141 cri.go:89] found id: ""
	I1202 19:28:26.296707   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.296714   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:26.296722   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:26.296735   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:26.371620   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:26.362743   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.363658   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365278   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365832   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.367443   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:26.362743   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.363658   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365278   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365832   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.367443   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:26.371631   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:26.371641   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:26.451711   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:26.451734   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:26.483175   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:26.483191   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:26.549681   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:26.549701   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:29.061808   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:29.072772   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:29.072827   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:29.101985   46141 cri.go:89] found id: ""
	I1202 19:28:29.101999   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.102006   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:29.102013   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:29.102074   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:29.128784   46141 cri.go:89] found id: ""
	I1202 19:28:29.128797   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.128803   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:29.128808   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:29.128862   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:29.156726   46141 cri.go:89] found id: ""
	I1202 19:28:29.156740   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.156747   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:29.156753   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:29.156810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:29.186146   46141 cri.go:89] found id: ""
	I1202 19:28:29.186159   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.186167   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:29.186173   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:29.186230   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:29.210367   46141 cri.go:89] found id: ""
	I1202 19:28:29.210381   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.210387   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:29.210392   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:29.210448   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:29.234607   46141 cri.go:89] found id: ""
	I1202 19:28:29.234620   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.234635   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:29.234641   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:29.234695   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:29.260124   46141 cri.go:89] found id: ""
	I1202 19:28:29.260137   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.260144   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:29.260151   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:29.260161   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:29.270869   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:29.270885   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:29.335425   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:29.327338   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.328130   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.329607   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.330178   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.332063   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:29.327338   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.328130   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.329607   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.330178   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.332063   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:29.335435   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:29.335448   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:29.416026   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:29.416053   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:29.444738   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:29.444757   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:32.015450   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:32.028692   46141 kubeadm.go:602] duration metric: took 4m2.303606504s to restartPrimaryControlPlane
	W1202 19:28:32.028752   46141 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 19:28:32.028882   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:28:32.448460   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:28:32.461105   46141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:28:32.468953   46141 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:28:32.469018   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:28:32.476620   46141 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:28:32.476629   46141 kubeadm.go:158] found existing configuration files:
	
	I1202 19:28:32.476680   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:28:32.484342   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:28:32.484396   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:28:32.491816   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:28:32.499468   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:28:32.499526   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:28:32.506680   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:28:32.513998   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:28:32.514056   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:28:32.521915   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:28:32.529746   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:28:32.529813   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
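	(Editor's note: the grep/rm cycle above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint is not found; here every file is simply absent, so each grep exits with status 2 and the rm is a no-op. A hedged single-file sketch of that check, using the endpoint string shown in the log:)

	f=/etc/kubernetes/admin.conf
	if ! sudo grep -q "https://control-plane.minikube.internal:8441" "$f"; then
	  # Endpoint missing (or file absent): remove the file so kubeadm regenerates it on the next init.
	  sudo rm -f "$f"
	fi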
	I1202 19:28:32.537427   46141 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:28:32.575514   46141 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:28:32.575563   46141 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:28:32.649801   46141 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:28:32.649866   46141 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:28:32.649900   46141 kubeadm.go:319] OS: Linux
	I1202 19:28:32.649943   46141 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:28:32.649990   46141 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:28:32.650036   46141 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:28:32.650083   46141 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:28:32.650129   46141 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:28:32.650176   46141 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:28:32.650220   46141 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:28:32.650266   46141 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:28:32.650311   46141 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:28:32.711361   46141 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:28:32.711478   46141 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:28:32.711574   46141 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:28:32.719716   46141 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:28:32.725408   46141 out.go:252]   - Generating certificates and keys ...
	I1202 19:28:32.725506   46141 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:28:32.725580   46141 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:28:32.725675   46141 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:28:32.725741   46141 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:28:32.725818   46141 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:28:32.725877   46141 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:28:32.725939   46141 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:28:32.726006   46141 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:28:32.726085   46141 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:28:32.726169   46141 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:28:32.726206   46141 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:28:32.726266   46141 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:28:32.962990   46141 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:28:33.139589   46141 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:28:33.816592   46141 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:28:34.040085   46141 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:28:34.279545   46141 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:28:34.280074   46141 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:28:34.282763   46141 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:28:34.285708   46141 out.go:252]   - Booting up control plane ...
	I1202 19:28:34.285809   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:28:34.285891   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:28:34.288012   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:28:34.303407   46141 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:28:34.303530   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:28:34.311292   46141 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:28:34.311561   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:28:34.311687   46141 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:28:34.441389   46141 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:28:34.442903   46141 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:32:34.442631   46141 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001443729s
	I1202 19:32:34.442655   46141 kubeadm.go:319] 
	I1202 19:32:34.442716   46141 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:32:34.442751   46141 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:32:34.442868   46141 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:32:34.442876   46141 kubeadm.go:319] 
	I1202 19:32:34.443019   46141 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:32:34.443050   46141 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:32:34.443105   46141 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:32:34.443119   46141 kubeadm.go:319] 
	I1202 19:32:34.446600   46141 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:32:34.447010   46141 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:32:34.447116   46141 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:32:34.447358   46141 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:32:34.447364   46141 kubeadm.go:319] 
	I1202 19:32:34.447431   46141 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 19:32:34.447530   46141 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443729s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
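	(Editor's note: the init failure above is the kubelet never answering its local health endpoint within kubeadm's 4-minute wait. The kubeadm output already names the next diagnostic steps; a short sketch of those checks, run inside the node, with the healthz URL and unit name taken verbatim from the log:)

	# Is the kubelet unit running, and what did it last log?
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet -n 100 --no-pager
	# The same health probe kubeadm polls for up to 4m0s:
	curl -sS http://127.0.0.1:10248/healthz || echo "kubelet healthz not responding"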
	
	I1202 19:32:34.447615   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:32:34.857158   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:32:34.869767   46141 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:32:34.869822   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:32:34.877453   46141 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:32:34.877463   46141 kubeadm.go:158] found existing configuration files:
	
	I1202 19:32:34.877520   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:32:34.885001   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:32:34.885057   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:32:34.892315   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:32:34.899801   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:32:34.899854   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:32:34.907104   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:32:34.914843   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:32:34.914905   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:32:34.922357   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:32:34.930005   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:32:34.930062   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:32:34.937883   46141 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:32:34.977710   46141 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:32:34.977941   46141 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:32:35.052803   46141 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:32:35.052872   46141 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:32:35.052916   46141 kubeadm.go:319] OS: Linux
	I1202 19:32:35.052967   46141 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:32:35.053025   46141 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:32:35.053081   46141 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:32:35.053132   46141 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:32:35.053189   46141 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:32:35.053247   46141 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:32:35.053296   46141 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:32:35.053361   46141 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:32:35.053405   46141 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:32:35.129057   46141 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:32:35.129160   46141 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:32:35.129249   46141 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:32:35.136437   46141 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:32:35.141766   46141 out.go:252]   - Generating certificates and keys ...
	I1202 19:32:35.141858   46141 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:32:35.141951   46141 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:32:35.142045   46141 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:32:35.142120   46141 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:32:35.142195   46141 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:32:35.142254   46141 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:32:35.142330   46141 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:32:35.142391   46141 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:32:35.142465   46141 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:32:35.142537   46141 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:32:35.142573   46141 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:32:35.142628   46141 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:32:35.719108   46141 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:32:35.855328   46141 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:32:36.315829   46141 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:32:36.611755   46141 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:32:36.762758   46141 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:32:36.763311   46141 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:32:36.766390   46141 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:32:36.769564   46141 out.go:252]   - Booting up control plane ...
	I1202 19:32:36.769677   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:32:36.769754   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:32:36.771251   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:32:36.785826   46141 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:32:36.785928   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:32:36.793103   46141 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:32:36.793426   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:32:36.793594   46141 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:32:36.913663   46141 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:32:36.913775   46141 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:36:36.914797   46141 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001215513s
	I1202 19:36:36.914820   46141 kubeadm.go:319] 
	I1202 19:36:36.914918   46141 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:36:36.915114   46141 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:36:36.915295   46141 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:36:36.915303   46141 kubeadm.go:319] 
	I1202 19:36:36.915482   46141 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:36:36.915772   46141 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:36:36.915825   46141 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:36:36.915828   46141 kubeadm.go:319] 
	I1202 19:36:36.923850   46141 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:36:36.924318   46141 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:36:36.924432   46141 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:36:36.924695   46141 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:36:36.924703   46141 kubeadm.go:319] 
	I1202 19:36:36.924833   46141 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 19:36:36.924858   46141 kubeadm.go:403] duration metric: took 12m7.236978439s to StartCluster
	I1202 19:36:36.924902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:36:36.924959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:36:36.952746   46141 cri.go:89] found id: ""
	I1202 19:36:36.952760   46141 logs.go:282] 0 containers: []
	W1202 19:36:36.952767   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:36:36.952772   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:36:36.952828   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:36:36.977200   46141 cri.go:89] found id: ""
	I1202 19:36:36.977214   46141 logs.go:282] 0 containers: []
	W1202 19:36:36.977221   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:36:36.977226   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:36:36.977291   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:36:37.002232   46141 cri.go:89] found id: ""
	I1202 19:36:37.002246   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.002253   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:36:37.002258   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:36:37.002321   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:36:37.051601   46141 cri.go:89] found id: ""
	I1202 19:36:37.051615   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.051621   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:36:37.051626   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:36:37.051681   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:36:37.102950   46141 cri.go:89] found id: ""
	I1202 19:36:37.102976   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.102983   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:36:37.102988   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:36:37.103051   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:36:37.131342   46141 cri.go:89] found id: ""
	I1202 19:36:37.131355   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.131362   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:36:37.131368   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:36:37.131423   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:36:37.159192   46141 cri.go:89] found id: ""
	I1202 19:36:37.159206   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.159213   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:36:37.159221   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:36:37.159234   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:36:37.170095   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:36:37.170110   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:36:37.234222   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:36:37.226220   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.226951   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.228557   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.229059   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.230654   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:36:37.226220   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.226951   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.228557   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.229059   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.230654   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:36:37.234232   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:36:37.234242   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:36:37.306216   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:36:37.306234   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:36:37.334163   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:36:37.334178   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 19:36:37.399997   46141 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 19:36:37.400040   46141 out.go:285] * 
	W1202 19:36:37.400110   46141 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:36:37.400129   46141 out.go:285] * 
	W1202 19:36:37.402271   46141 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:36:37.407816   46141 out.go:203] 
	W1202 19:36:37.411562   46141 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:36:37.411641   46141 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 19:36:37.411664   46141 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 19:36:37.415811   46141 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546654939Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546834414Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546950457Z" level=info msg="Create NRI interface"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.5471107Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547130474Z" level=info msg="runtime interface created"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.54714466Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547151634Z" level=info msg="runtime interface starting up..."
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547157616Z" level=info msg="starting plugins..."
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547170686Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547251727Z" level=info msg="No systemd watchdog enabled"
	Dec 02 19:24:28 functional-374330 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.715009926Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=bc19958f-d803-4cd2-a545-4f6c118c1f40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.716039792Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=97921bbe-b2e3-494c-be19-702e5072b6db name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.716591601Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=702ce713-4736-4f82-bd4c-9fc9629fcb4d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.717128034Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5900f7cc-9a33-4e7a-8a73-829e63e64047 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.717627973Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0a3735ac-393a-45fe-a0d5-34b181ae2dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.718273997Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4854b9da-7f98-4e1b-9a6a-97fc85aeb622 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.718754056Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=f046502e-805f-4087-97ee-276ea86f9117 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.132448562Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=bfb0729f-fcf5-4cf1-8661-79e44060815d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.133109196Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=59868b2f-ef1f-42db-9580-1c52177e5173 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.133599056Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=0dadf3fc-12a7-405c-8560-5fb835ac24e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.134131974Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f3eedcce-a194-4413-8ad5-a61c4ca64183 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.134584067Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0d9672a7-dea9-4cd7-b618-4662ee6fbedc name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.135094472Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=61806fbf-e06a-40e0-ab81-3632b0f3ac8c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.135559257Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=e966dc55-aa48-4909-b2a5-1769d8bd5c4c name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:38:45.989229   23901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:45.989596   23901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:45.991266   23901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:45.991576   23901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:45.993119   23901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:38:46 up  1:21,  0 user,  load average: 0.58, 0.35, 0.32
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:38:43 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:43 functional-374330 kubelet[23746]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:43 functional-374330 kubelet[23746]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:43 functional-374330 kubelet[23746]: E1202 19:38:43.813858   23746 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:43 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:43 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:44 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 02 19:38:44 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:44 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:44 functional-374330 kubelet[23789]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:44 functional-374330 kubelet[23789]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:44 functional-374330 kubelet[23789]: E1202 19:38:44.596348   23789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:44 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:44 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:45 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 02 19:38:45 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:45 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:45 functional-374330 kubelet[23817]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:45 functional-374330 kubelet[23817]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:45 functional-374330 kubelet[23817]: E1202 19:38:45.373410   23817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:45 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:45 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:46 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 02 19:38:46 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:46 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (339.633432ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.15s)
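Note on the recurring failure above: the kubelet journal shows kubelet v1.35.0-beta.0 exiting with "kubelet is configured to not run on a host using cgroup v1", so every downstream check (kubeadm wait-control-plane, kubectl against localhost:8441, minikube status) fails as a consequence. A minimal sketch of the two remedies the log itself points at for a cgroup v1 host such as this 5.15.0-1084-aws node follows; the KubeletConfiguration YAML casing of the 'FailCgroupV1' option is an assumption, since the log only names the option, and how minikube would deliver that patch to kubelet is not shown here.

	# Suggestion printed by minikube above:
	out/minikube-linux-arm64 start -p functional-374330 --extra-config=kubelet.cgroup-driver=systemd

	# Per the kubeadm SystemVerification warning, kubelet v1.35+ additionally needs cgroup v1
	# support explicitly re-enabled in its configuration (field casing assumed), and the
	# corresponding preflight validation skipped:
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false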

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-374330 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-374330 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (59.69126ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-374330 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-374330 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-374330 describe po hello-node-connect: exit status 1 (66.466313ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-374330 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-374330 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-374330 logs -l app=hello-node-connect: exit status 1 (57.749201ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-374330 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-374330 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-374330 describe svc hello-node-connect: exit status 1 (56.46089ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-374330 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (312.764349ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-374330 cache reload                                                                                                                               │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ ssh     │ functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │ 02 Dec 25 19:24 UTC │
	│ kubectl │ functional-374330 kubectl -- --context functional-374330 get pods                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	│ start   │ -p functional-374330 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:24 UTC │                     │
	│ cp      │ functional-374330 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ config  │ functional-374330 config unset cpus                                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ config  │ functional-374330 config get cpus                                                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │                     │
	│ config  │ functional-374330 config set cpus 2                                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ config  │ functional-374330 config get cpus                                                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ config  │ functional-374330 config unset cpus                                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ ssh     │ functional-374330 ssh -n functional-374330 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ config  │ functional-374330 config get cpus                                                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │                     │
	│ ssh     │ functional-374330 ssh echo hello                                                                                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ cp      │ functional-374330 cp functional-374330:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2647085349/001/cp-test.txt │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ ssh     │ functional-374330 ssh cat /etc/hostname                                                                                                                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ ssh     │ functional-374330 ssh -n functional-374330 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ tunnel  │ functional-374330 tunnel --alsologtostderr                                                                                                                   │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │                     │
	│ tunnel  │ functional-374330 tunnel --alsologtostderr                                                                                                                   │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │                     │
	│ cp      │ functional-374330 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ tunnel  │ functional-374330 tunnel --alsologtostderr                                                                                                                   │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │                     │
	│ ssh     │ functional-374330 ssh -n functional-374330 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:36 UTC │ 02 Dec 25 19:36 UTC │
	│ addons  │ functional-374330 addons list                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ addons  │ functional-374330 addons list -o json                                                                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
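The cp/ssh round-trip exercised in the rows above can be replayed by hand against the same profile; a minimal sketch, with the full command-line form inferred from the audit table rather than copied from a transcript:

	minikube -p functional-374330 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-374330 ssh -n functional-374330 "sudo cat /home/docker/cp-test.txt"
	minikube -p functional-374330 addons list -o json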
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:24:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
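For reading the entries below: per the format line above, a line such as "I1202 19:24:25.235145   46141 out.go:360]" decodes as Info severity, December 2, 19:24:25.235145 by the host clock, thread/process id 46141, emitted from out.go line 360; W-prefixed lines are warnings and E-prefixed lines errors.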
	I1202 19:24:25.235145   46141 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:24:25.235262   46141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:24:25.235266   46141 out.go:374] Setting ErrFile to fd 2...
	I1202 19:24:25.235270   46141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:24:25.235501   46141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:24:25.235832   46141 out.go:368] Setting JSON to false
	I1202 19:24:25.236657   46141 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4004,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:24:25.236712   46141 start.go:143] virtualization:  
	I1202 19:24:25.240137   46141 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:24:25.243026   46141 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:24:25.243116   46141 notify.go:221] Checking for updates...
	I1202 19:24:25.249453   46141 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:24:25.252235   46141 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:24:25.255042   46141 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:24:25.257985   46141 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:24:25.260839   46141 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:24:25.264178   46141 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:24:25.264323   46141 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:24:25.284942   46141 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:24:25.285038   46141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:24:25.377890   46141 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 19:24:25.369067605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:24:25.377983   46141 docker.go:319] overlay module found
	I1202 19:24:25.380979   46141 out.go:179] * Using the docker driver based on existing profile
	I1202 19:24:25.383947   46141 start.go:309] selected driver: docker
	I1202 19:24:25.383955   46141 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:25.384041   46141 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:24:25.384143   46141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:24:25.448724   46141 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 19:24:25.440009169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:24:25.449135   46141 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:24:25.449156   46141 cni.go:84] Creating CNI manager for ""
	I1202 19:24:25.449204   46141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:24:25.449250   46141 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:25.452291   46141 out.go:179] * Starting "functional-374330" primary control-plane node in "functional-374330" cluster
	I1202 19:24:25.455020   46141 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:24:25.457907   46141 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:24:25.460700   46141 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:24:25.460741   46141 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:24:25.479854   46141 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:24:25.479865   46141 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 19:24:25.525268   46141 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 19:24:25.722344   46141 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 19:24:25.722516   46141 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/config.json ...
	I1202 19:24:25.722575   46141 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722662   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 19:24:25.722674   46141 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.293µs
	I1202 19:24:25.722687   46141 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 19:24:25.722699   46141 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722728   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 19:24:25.722732   46141 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 34.97µs
	I1202 19:24:25.722737   46141 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722755   46141 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722765   46141 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:24:25.722787   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 19:24:25.722792   46141 start.go:360] acquireMachinesLock for functional-374330: {Name:mk7c3c3fd8194ecd6da810be414b92299700fc27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722800   46141 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 43.388µs
	I1202 19:24:25.722806   46141 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722816   46141 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722833   46141 start.go:364] duration metric: took 28.102µs to acquireMachinesLock for "functional-374330"
	I1202 19:24:25.722844   46141 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:24:25.722848   46141 fix.go:54] fixHost starting: 
	I1202 19:24:25.722868   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 19:24:25.722874   46141 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 59.51µs
	I1202 19:24:25.722879   46141 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722888   46141 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722914   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 19:24:25.722918   46141 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.859µs
	I1202 19:24:25.722926   46141 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 19:24:25.722934   46141 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.722961   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 19:24:25.722965   46141 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.041µs
	I1202 19:24:25.722969   46141 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 19:24:25.722984   46141 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.723013   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 19:24:25.723018   46141 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.477µs
	I1202 19:24:25.723022   46141 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 19:24:25.723030   46141 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:24:25.723054   46141 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 19:24:25.723058   46141 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.956µs
	I1202 19:24:25.723062   46141 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 19:24:25.723069   46141 cache.go:87] Successfully saved all images to host disk.
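The cache hits above all point at image tarballs under the test host's .minikube directory; they can be inspected directly, and images added through the `cache` subcommand seen in the table earlier show up in `minikube cache list`. A sketch, assuming the same MINIKUBE_HOME as this run:

	ls /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/
	minikube cache list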
	I1202 19:24:25.723135   46141 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:24:25.740024   46141 fix.go:112] recreateIfNeeded on functional-374330: state=Running err=<nil>
	W1202 19:24:25.740043   46141 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:24:25.743422   46141 out.go:252] * Updating the running docker "functional-374330" container ...
	I1202 19:24:25.743444   46141 machine.go:94] provisionDockerMachine start ...
	I1202 19:24:25.743520   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:25.759952   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:25.760267   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:25.760274   46141 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:24:25.913242   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:24:25.913255   46141 ubuntu.go:182] provisioning hostname "functional-374330"
	I1202 19:24:25.913315   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:25.930816   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:25.931108   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:25.931116   46141 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-374330 && echo "functional-374330" | sudo tee /etc/hostname
	I1202 19:24:26.092717   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-374330
	
	I1202 19:24:26.092791   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.112703   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:26.112993   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:26.113006   46141 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-374330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-374330/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-374330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:24:26.261761   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: 
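Each SSH step in this provisioning sequence goes through the host port that Docker publishes for the container's port 22 (32783 in this run). A minimal sketch of doing the same lookup and connecting by hand, assuming the machine key path shown further down in this log:

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-374330)
	ssh -i /home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa -p "$PORT" docker@127.0.0.1 hostname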
	I1202 19:24:26.261776   46141 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:24:26.261797   46141 ubuntu.go:190] setting up certificates
	I1202 19:24:26.261807   46141 provision.go:84] configureAuth start
	I1202 19:24:26.261862   46141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:24:26.279208   46141 provision.go:143] copyHostCerts
	I1202 19:24:26.279270   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:24:26.279282   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:24:26.279355   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:24:26.279450   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:24:26.279454   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:24:26.279478   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:24:26.279560   46141 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:24:26.279563   46141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:24:26.279586   46141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:24:26.279633   46141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.functional-374330 san=[127.0.0.1 192.168.49.2 functional-374330 localhost minikube]
	I1202 19:24:26.509539   46141 provision.go:177] copyRemoteCerts
	I1202 19:24:26.509599   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:24:26.509644   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.526423   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:26.629290   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:24:26.645497   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 19:24:26.662152   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:24:26.678745   46141 provision.go:87] duration metric: took 416.916855ms to configureAuth
	I1202 19:24:26.678762   46141 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:24:26.678944   46141 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:24:26.679035   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:26.696214   46141 main.go:143] libmachine: Using SSH client type: native
	I1202 19:24:26.696565   46141 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1202 19:24:26.696576   46141 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:24:27.030556   46141 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:24:27.030570   46141 machine.go:97] duration metric: took 1.287120124s to provisionDockerMachine
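The last provisioning command above drops an insecure-registry flag for CRI-O into /etc/sysconfig/crio.minikube and restarts the runtime. A quick, hedged way to confirm what was written:

	minikube -p functional-374330 ssh "cat /etc/sysconfig/crio.minikube"
	# expected to contain: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '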
	I1202 19:24:27.030580   46141 start.go:293] postStartSetup for "functional-374330" (driver="docker")
	I1202 19:24:27.030591   46141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:24:27.030695   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:24:27.030734   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.047988   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.153876   46141 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:24:27.157492   46141 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:24:27.157509   46141 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:24:27.157519   46141 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:24:27.157573   46141 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:24:27.157644   46141 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:24:27.157766   46141 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts -> hosts in /etc/test/nested/copy/4470
	I1202 19:24:27.157814   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4470
	I1202 19:24:27.165310   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:24:27.182588   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts --> /etc/test/nested/copy/4470/hosts (40 bytes)
	I1202 19:24:27.199652   46141 start.go:296] duration metric: took 169.058439ms for postStartSetup
	I1202 19:24:27.199721   46141 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:24:27.199772   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.216431   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.322237   46141 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:24:27.326538   46141 fix.go:56] duration metric: took 1.603683597s for fixHost
	I1202 19:24:27.326551   46141 start.go:83] releasing machines lock for "functional-374330", held for 1.603712807s
	I1202 19:24:27.326613   46141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-374330
	I1202 19:24:27.342449   46141 ssh_runner.go:195] Run: cat /version.json
	I1202 19:24:27.342488   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.342715   46141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:24:27.342781   46141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:24:27.364991   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.373848   46141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:24:27.555572   46141 ssh_runner.go:195] Run: systemctl --version
	I1202 19:24:27.562641   46141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:24:27.610413   46141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:24:27.614481   46141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:24:27.614543   46141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:24:27.622250   46141 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:24:27.622263   46141 start.go:496] detecting cgroup driver to use...
	I1202 19:24:27.622291   46141 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:24:27.622334   46141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:24:27.637407   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:24:27.650559   46141 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:24:27.650610   46141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:24:27.665862   46141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:24:27.678201   46141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:24:27.787007   46141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:24:27.899090   46141 docker.go:234] disabling docker service ...
	I1202 19:24:27.899177   46141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:24:27.914485   46141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:24:27.927681   46141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:24:28.045412   46141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:24:28.177124   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:24:28.189334   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:24:28.202961   46141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:24:28.203015   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.211343   46141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:24:28.211423   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.219933   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.227929   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.236036   46141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:24:28.243301   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.251359   46141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.259074   46141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:24:28.267235   46141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:24:28.274309   46141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:24:28.280789   46141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:24:28.409376   46141 ssh_runner.go:195] Run: sudo systemctl restart crio
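The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl line up with what kubelet is started with below. A sketch for checking the resulting values once crio has restarted:

	minikube -p functional-374330 ssh "sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# per the commands above this should show: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and
	# "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls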
	I1202 19:24:28.552601   46141 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:24:28.552676   46141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:24:28.556545   46141 start.go:564] Will wait 60s for crictl version
	I1202 19:24:28.556594   46141 ssh_runner.go:195] Run: which crictl
	I1202 19:24:28.560016   46141 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:24:28.584096   46141 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:24:28.584179   46141 ssh_runner.go:195] Run: crio --version
	I1202 19:24:28.612035   46141 ssh_runner.go:195] Run: crio --version
	I1202 19:24:28.644724   46141 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 19:24:28.647719   46141 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:24:28.663830   46141 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:24:28.670469   46141 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 19:24:28.673257   46141 kubeadm.go:884] updating cluster {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:24:28.673378   46141 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 19:24:28.673715   46141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:24:28.712979   46141 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:24:28.712990   46141 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:24:28.712996   46141 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1202 19:24:28.713091   46141 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-374330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:24:28.713167   46141 ssh_runner.go:195] Run: crio config
	I1202 19:24:28.766896   46141 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 19:24:28.766918   46141 cni.go:84] Creating CNI manager for ""
	I1202 19:24:28.766927   46141 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:24:28.766941   46141 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:24:28.766963   46141 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-374330 NodeName:functional-374330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:24:28.767080   46141 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-374330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:24:28.767147   46141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 19:24:28.774515   46141 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:24:28.774573   46141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:24:28.781818   46141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1202 19:24:28.793879   46141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 19:24:28.805690   46141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
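The kubeadm configuration printed above is what just landed in /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file without applying it; a sketch, assuming kubeadm sits next to the kubelet binary referenced in the unit file above:

	minikube -p functional-374330 ssh "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"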
	I1202 19:24:28.818120   46141 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 19:24:28.821584   46141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:24:28.923612   46141 ssh_runner.go:195] Run: sudo systemctl start kubelet
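The in-memory scp steps above install the kubelet unit and its 10-kubeadm.conf drop-in before the daemon-reload and start. To see the merged unit systemd actually runs, and whether kubelet came up, something like:

	minikube -p functional-374330 ssh "sudo systemctl cat kubelet"
	minikube -p functional-374330 ssh "sudo systemctl is-active kubelet"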
	I1202 19:24:29.044163   46141 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330 for IP: 192.168.49.2
	I1202 19:24:29.044174   46141 certs.go:195] generating shared ca certs ...
	I1202 19:24:29.044188   46141 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:24:29.044325   46141 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:24:29.044362   46141 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:24:29.044367   46141 certs.go:257] generating profile certs ...
	I1202 19:24:29.044449   46141 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.key
	I1202 19:24:29.044505   46141 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key.b350056b
	I1202 19:24:29.044543   46141 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key
	I1202 19:24:29.044646   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:24:29.044677   46141 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:24:29.044683   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:24:29.044708   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:24:29.044730   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:24:29.044752   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:24:29.044793   46141 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:24:29.045393   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:24:29.065539   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:24:29.085818   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:24:29.107933   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:24:29.124745   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 19:24:29.141714   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:24:29.158359   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:24:29.174925   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:24:29.191660   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:24:29.208637   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:24:29.226113   46141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:24:29.242250   46141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:24:29.254421   46141 ssh_runner.go:195] Run: openssl version
	I1202 19:24:29.260244   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:24:29.267946   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.271417   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.271472   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:24:29.312066   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:24:29.319673   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:24:29.327613   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.331149   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.331213   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:24:29.371529   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:24:29.378966   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:24:29.386811   46141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.390484   46141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.390535   46141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:24:29.430996   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:24:29.438578   46141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:24:29.442282   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:24:29.482760   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:24:29.523856   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:24:29.564389   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:24:29.604810   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:24:29.645380   46141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:24:29.687886   46141 kubeadm.go:401] StartCluster: {Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:24:29.687963   46141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:24:29.688021   46141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:24:29.717432   46141 cri.go:89] found id: ""
	I1202 19:24:29.717490   46141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:24:29.725067   46141 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:24:29.725077   46141 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:24:29.725126   46141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:24:29.732065   46141 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.732614   46141 kubeconfig.go:125] found "functional-374330" server: "https://192.168.49.2:8441"
	I1202 19:24:29.734000   46141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:24:29.741333   46141 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 19:09:53.796915722 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 19:24:28.810106590 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1202 19:24:29.741350   46141 kubeadm.go:1161] stopping kube-system containers ...
	I1202 19:24:29.741369   46141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 19:24:29.741422   46141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:24:29.768496   46141 cri.go:89] found id: ""
	I1202 19:24:29.768555   46141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 19:24:29.784309   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:24:29.792418   46141 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  2 19:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 19:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  2 19:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 19:14 /etc/kubernetes/scheduler.conf
	
	I1202 19:24:29.792472   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:24:29.800190   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:24:29.807339   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.807391   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:24:29.814250   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:24:29.821376   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.821427   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:24:29.828870   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:24:29.836580   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:24:29.836638   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:24:29.843919   46141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:24:29.851701   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:29.899912   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.003595   46141 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.103659313s)
	I1202 19:24:31.003654   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.210419   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.280327   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 19:24:31.324104   46141 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:24:31.324170   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:31.824388   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:32.324845   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:32.825182   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:33.324373   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:33.824654   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:34.325193   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:34.825112   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:35.324714   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:35.824303   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:36.324356   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:36.824683   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:37.324294   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:37.824358   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:38.324922   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:38.824376   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:39.324270   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:39.825008   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:40.324553   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:40.824838   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:41.325254   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:41.824311   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:42.324452   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:42.824362   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:43.325153   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:43.824379   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:44.324948   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:44.824287   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:45.325093   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:45.824914   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:46.324315   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:46.825135   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:47.324688   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:47.824319   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:48.325046   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:48.824341   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:49.324306   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:49.824985   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:50.324502   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:50.825062   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:51.325159   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:51.824329   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:52.324431   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:52.824365   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:53.324584   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:53.824229   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:54.324898   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:54.825268   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:55.324621   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:55.824623   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:56.325215   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:56.824326   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:57.324724   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:57.824643   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:58.325213   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:58.824317   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:59.324263   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:24:59.824993   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:00.324689   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:00.824372   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:01.324768   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:01.824973   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:02.324385   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:02.824324   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:03.325090   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:03.824792   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:04.324258   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:04.825092   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:05.324727   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:05.825067   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:06.325261   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:06.824374   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:07.324258   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:07.825117   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:08.324373   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:08.824931   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:09.324369   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:09.824858   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:10.324555   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:10.824370   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:11.324369   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:11.824824   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:12.325272   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:12.824975   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:13.324579   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:13.824349   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:14.324992   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:14.824471   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:15.325189   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:15.824307   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:16.324299   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:16.824860   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:17.324477   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:17.824853   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:18.324910   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:18.825002   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:19.324312   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:19.824665   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:20.324238   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:20.824261   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:21.325216   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:21.824750   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:22.324310   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:22.825285   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:23.325114   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:23.824701   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:24.324390   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:24.825161   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:25.325162   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:25.824364   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:26.324725   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:26.825185   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:27.324377   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:27.825213   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:28.324403   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:28.824310   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:29.324960   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:29.824818   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:30.325151   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:30.824591   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:31.324373   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:31.324449   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:31.353616   46141 cri.go:89] found id: ""
	I1202 19:25:31.353629   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.353636   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:31.353642   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:31.353718   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:31.378636   46141 cri.go:89] found id: ""
	I1202 19:25:31.378649   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.378656   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:31.378661   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:31.378716   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:31.403292   46141 cri.go:89] found id: ""
	I1202 19:25:31.403305   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.403312   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:31.403317   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:31.403371   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:31.427054   46141 cri.go:89] found id: ""
	I1202 19:25:31.427067   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.427074   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:31.427079   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:31.427133   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:31.451516   46141 cri.go:89] found id: ""
	I1202 19:25:31.451529   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.451536   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:31.451541   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:31.451595   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:31.474863   46141 cri.go:89] found id: ""
	I1202 19:25:31.474876   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.474889   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:31.474895   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:31.474967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:31.499414   46141 cri.go:89] found id: ""
	I1202 19:25:31.499427   46141 logs.go:282] 0 containers: []
	W1202 19:25:31.499434   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:31.499442   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:31.499454   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:31.563997   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:31.564014   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:31.575066   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:31.575080   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:31.644130   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:31.635359   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.636084   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.637852   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.638523   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.640440   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:31.635359   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.636084   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.637852   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.638523   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:31.640440   11621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:31.644152   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:31.644164   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:31.720566   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:31.720584   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:34.247873   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:34.257765   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:34.257820   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:34.284109   46141 cri.go:89] found id: ""
	I1202 19:25:34.284122   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.284129   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:34.284134   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:34.284185   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:34.322934   46141 cri.go:89] found id: ""
	I1202 19:25:34.322947   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.322954   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:34.322959   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:34.323011   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:34.356765   46141 cri.go:89] found id: ""
	I1202 19:25:34.356778   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.356785   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:34.356790   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:34.356843   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:34.383799   46141 cri.go:89] found id: ""
	I1202 19:25:34.383811   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.383818   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:34.383824   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:34.383875   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:34.407104   46141 cri.go:89] found id: ""
	I1202 19:25:34.407117   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.407133   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:34.407139   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:34.407207   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:34.431504   46141 cri.go:89] found id: ""
	I1202 19:25:34.431517   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.431523   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:34.431529   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:34.431624   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:34.459463   46141 cri.go:89] found id: ""
	I1202 19:25:34.459477   46141 logs.go:282] 0 containers: []
	W1202 19:25:34.459484   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:34.459492   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:34.459503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:34.524752   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:34.524770   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:34.537010   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:34.537025   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:34.599686   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:34.591441   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.592120   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.593844   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.594367   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.596057   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:34.591441   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.592120   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.593844   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.594367   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:34.596057   11725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:34.599696   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:34.599708   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:34.676464   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:34.676483   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:37.209911   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:37.219636   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:37.219691   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:37.243765   46141 cri.go:89] found id: ""
	I1202 19:25:37.243778   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.243785   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:37.243790   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:37.243842   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:37.272015   46141 cri.go:89] found id: ""
	I1202 19:25:37.272028   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.272035   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:37.272040   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:37.272096   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:37.296807   46141 cri.go:89] found id: ""
	I1202 19:25:37.296819   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.296835   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:37.296840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:37.296893   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:37.327436   46141 cri.go:89] found id: ""
	I1202 19:25:37.327449   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.327456   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:37.327461   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:37.327515   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:37.362906   46141 cri.go:89] found id: ""
	I1202 19:25:37.362919   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.362926   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:37.362931   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:37.362985   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:37.386876   46141 cri.go:89] found id: ""
	I1202 19:25:37.386889   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.386896   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:37.386902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:37.386976   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:37.410131   46141 cri.go:89] found id: ""
	I1202 19:25:37.410144   46141 logs.go:282] 0 containers: []
	W1202 19:25:37.410151   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:37.410158   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:37.410169   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:37.420302   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:37.420317   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:37.483848   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:37.476112   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477214   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477968   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.478882   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.480477   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:37.476112   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477214   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.477968   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.478882   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:37.480477   11829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:37.483857   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:37.483867   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:37.562871   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:37.562889   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:37.593595   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:37.593609   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:40.162349   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:40.172453   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:40.172514   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:40.199726   46141 cri.go:89] found id: ""
	I1202 19:25:40.199756   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.199763   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:40.199768   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:40.199825   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:40.229015   46141 cri.go:89] found id: ""
	I1202 19:25:40.229029   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.229037   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:40.229042   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:40.229097   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:40.255016   46141 cri.go:89] found id: ""
	I1202 19:25:40.255029   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.255036   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:40.255041   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:40.255104   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:40.280314   46141 cri.go:89] found id: ""
	I1202 19:25:40.280337   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.280343   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:40.280349   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:40.280409   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:40.317261   46141 cri.go:89] found id: ""
	I1202 19:25:40.317275   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.317281   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:40.317286   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:40.317351   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:40.350568   46141 cri.go:89] found id: ""
	I1202 19:25:40.350581   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.350588   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:40.350602   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:40.350655   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:40.376758   46141 cri.go:89] found id: ""
	I1202 19:25:40.376772   46141 logs.go:282] 0 containers: []
	W1202 19:25:40.376786   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:40.376794   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:40.376805   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:40.452695   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:40.452719   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:40.478860   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:40.478875   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:40.558280   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:40.558307   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:40.569138   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:40.569159   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:40.633967   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:40.626503   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.627229   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.628749   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.629139   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.630662   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:40.626503   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.627229   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.628749   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.629139   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:40.630662   11949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:43.135632   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:43.145532   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:43.145592   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:43.170325   46141 cri.go:89] found id: ""
	I1202 19:25:43.170338   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.170345   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:43.170372   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:43.170432   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:43.194956   46141 cri.go:89] found id: ""
	I1202 19:25:43.194970   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.194977   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:43.194982   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:43.195039   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:43.221778   46141 cri.go:89] found id: ""
	I1202 19:25:43.221792   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.221800   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:43.221805   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:43.221862   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:43.248205   46141 cri.go:89] found id: ""
	I1202 19:25:43.248218   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.248225   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:43.248230   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:43.248283   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:43.275958   46141 cri.go:89] found id: ""
	I1202 19:25:43.275971   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.275979   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:43.275984   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:43.276040   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:43.311994   46141 cri.go:89] found id: ""
	I1202 19:25:43.312006   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.312013   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:43.312018   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:43.312070   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:43.338867   46141 cri.go:89] found id: ""
	I1202 19:25:43.338881   46141 logs.go:282] 0 containers: []
	W1202 19:25:43.338888   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:43.338896   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:43.338907   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:43.370951   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:43.370966   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:43.439006   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:43.439023   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:43.449811   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:43.449827   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:43.523274   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:43.515029   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.515710   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.517443   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.518065   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.519776   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:43.515029   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.515710   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.517443   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.518065   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:43.519776   12052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:43.523283   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:43.523293   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:46.099316   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:46.109738   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:46.109799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:46.135973   46141 cri.go:89] found id: ""
	I1202 19:25:46.135986   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.135993   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:46.135998   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:46.136053   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:46.160433   46141 cri.go:89] found id: ""
	I1202 19:25:46.160447   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.160454   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:46.160459   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:46.160562   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:46.185345   46141 cri.go:89] found id: ""
	I1202 19:25:46.185358   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.185365   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:46.185371   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:46.185431   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:46.209708   46141 cri.go:89] found id: ""
	I1202 19:25:46.209721   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.209728   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:46.209733   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:46.209799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:46.234274   46141 cri.go:89] found id: ""
	I1202 19:25:46.234288   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.234294   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:46.234299   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:46.234363   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:46.259257   46141 cri.go:89] found id: ""
	I1202 19:25:46.259271   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.259277   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:46.259282   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:46.259336   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:46.282587   46141 cri.go:89] found id: ""
	I1202 19:25:46.282601   46141 logs.go:282] 0 containers: []
	W1202 19:25:46.282607   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:46.282620   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:46.282630   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:46.360010   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:46.351882   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.352560   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354236   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354883   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.356438   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:46.351882   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.352560   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354236   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.354883   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:46.356438   12133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:46.360029   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:46.360040   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:46.435864   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:46.435883   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:46.464582   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:46.464597   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:46.531766   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:46.531784   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:49.042500   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:49.053773   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:49.053830   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:49.079262   46141 cri.go:89] found id: ""
	I1202 19:25:49.079276   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.079282   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:49.079288   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:49.079342   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:49.104725   46141 cri.go:89] found id: ""
	I1202 19:25:49.104738   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.104745   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:49.104759   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:49.104814   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:49.133788   46141 cri.go:89] found id: ""
	I1202 19:25:49.133801   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.133808   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:49.133824   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:49.133880   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:49.159349   46141 cri.go:89] found id: ""
	I1202 19:25:49.159371   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.159379   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:49.159384   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:49.159443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:49.197548   46141 cri.go:89] found id: ""
	I1202 19:25:49.197562   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.197569   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:49.197574   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:49.197641   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:49.223472   46141 cri.go:89] found id: ""
	I1202 19:25:49.223485   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.223492   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:49.223498   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:49.223558   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:49.247894   46141 cri.go:89] found id: ""
	I1202 19:25:49.247921   46141 logs.go:282] 0 containers: []
	W1202 19:25:49.247929   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:49.247936   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:49.247949   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:49.331462   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:49.331482   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:49.370297   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:49.370316   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:49.439052   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:49.439071   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:49.449975   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:49.449991   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:49.513463   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:49.505741   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.506188   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.507718   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.508375   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.510100   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:49.505741   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.506188   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.507718   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.508375   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:49.510100   12262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:52.015209   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:52.026897   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:52.026956   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:52.053387   46141 cri.go:89] found id: ""
	I1202 19:25:52.053401   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.053408   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:52.053416   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:52.053475   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:52.079773   46141 cri.go:89] found id: ""
	I1202 19:25:52.079787   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.079793   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:52.079799   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:52.079854   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:52.107526   46141 cri.go:89] found id: ""
	I1202 19:25:52.107539   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.107546   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:52.107551   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:52.107610   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:52.134040   46141 cri.go:89] found id: ""
	I1202 19:25:52.134054   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.134061   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:52.134066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:52.134124   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:52.160401   46141 cri.go:89] found id: ""
	I1202 19:25:52.160421   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.160445   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:52.160450   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:52.160512   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:52.186015   46141 cri.go:89] found id: ""
	I1202 19:25:52.186029   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.186035   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:52.186041   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:52.186097   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:52.211315   46141 cri.go:89] found id: ""
	I1202 19:25:52.211328   46141 logs.go:282] 0 containers: []
	W1202 19:25:52.211335   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:52.211342   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:52.211352   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:52.281330   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:52.281350   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:52.294618   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:52.294634   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:52.375867   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:52.366509   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.367120   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370063   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370490   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.371961   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:52.366509   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.367120   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370063   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.370490   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:52.371961   12356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:52.375884   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:52.375895   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:52.454410   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:52.454433   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:54.985073   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:54.997287   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:54.997351   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:55.033193   46141 cri.go:89] found id: ""
	I1202 19:25:55.033207   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.033214   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:55.033220   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:55.033285   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:55.059947   46141 cri.go:89] found id: ""
	I1202 19:25:55.059961   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.059968   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:55.059973   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:55.060032   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:55.089719   46141 cri.go:89] found id: ""
	I1202 19:25:55.089731   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.089738   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:55.089744   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:55.089804   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:55.116791   46141 cri.go:89] found id: ""
	I1202 19:25:55.116805   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.116811   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:55.116816   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:55.116872   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:55.144575   46141 cri.go:89] found id: ""
	I1202 19:25:55.144589   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.144597   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:55.144602   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:55.144663   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:55.170532   46141 cri.go:89] found id: ""
	I1202 19:25:55.170546   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.170553   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:55.170558   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:55.170613   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:55.201295   46141 cri.go:89] found id: ""
	I1202 19:25:55.201309   46141 logs.go:282] 0 containers: []
	W1202 19:25:55.201317   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:55.201324   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:55.201335   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:55.265951   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:55.265968   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:25:55.276457   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:55.276472   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:55.358449   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:55.350289   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.351045   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.352737   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.353291   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.354841   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:55.350289   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.351045   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.352737   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.353291   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:55.354841   12458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:55.358470   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:55.358481   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:55.438382   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:55.438401   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:57.969884   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:25:57.980234   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:25:57.980287   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:25:58.005151   46141 cri.go:89] found id: ""
	I1202 19:25:58.005165   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.005172   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:25:58.005177   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:25:58.005234   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:25:58.032254   46141 cri.go:89] found id: ""
	I1202 19:25:58.032267   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.032274   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:25:58.032279   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:25:58.032338   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:25:58.058556   46141 cri.go:89] found id: ""
	I1202 19:25:58.058570   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.058578   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:25:58.058583   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:25:58.058640   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:25:58.084123   46141 cri.go:89] found id: ""
	I1202 19:25:58.084136   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.084143   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:25:58.084148   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:25:58.084204   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:25:58.110792   46141 cri.go:89] found id: ""
	I1202 19:25:58.110806   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.110812   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:25:58.110820   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:25:58.110877   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:25:58.136499   46141 cri.go:89] found id: ""
	I1202 19:25:58.136512   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.136519   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:25:58.136524   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:25:58.136585   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:25:58.162083   46141 cri.go:89] found id: ""
	I1202 19:25:58.162096   46141 logs.go:282] 0 containers: []
	W1202 19:25:58.162104   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:25:58.162111   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:25:58.162121   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:25:58.223736   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:25:58.216185   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.216966   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218585   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218890   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.220343   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:25:58.216185   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.216966   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218585   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.218890   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:25:58.220343   12557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:25:58.223745   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:25:58.223756   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:25:58.308033   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:25:58.308051   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:25:58.341126   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:25:58.341141   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:25:58.407826   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:25:58.407843   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:00.920333   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:00.930302   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:00.930359   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:00.954390   46141 cri.go:89] found id: ""
	I1202 19:26:00.954404   46141 logs.go:282] 0 containers: []
	W1202 19:26:00.954411   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:00.954416   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:00.954483   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:00.980266   46141 cri.go:89] found id: ""
	I1202 19:26:00.980280   46141 logs.go:282] 0 containers: []
	W1202 19:26:00.980287   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:00.980292   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:00.980360   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:01.008460   46141 cri.go:89] found id: ""
	I1202 19:26:01.008482   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.008488   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:01.008493   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:01.008547   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:01.036672   46141 cri.go:89] found id: ""
	I1202 19:26:01.036686   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.036692   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:01.036698   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:01.036753   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:01.061548   46141 cri.go:89] found id: ""
	I1202 19:26:01.061562   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.061568   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:01.061573   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:01.061629   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:01.086617   46141 cri.go:89] found id: ""
	I1202 19:26:01.086631   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.086638   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:01.086643   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:01.086701   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:01.111676   46141 cri.go:89] found id: ""
	I1202 19:26:01.111690   46141 logs.go:282] 0 containers: []
	W1202 19:26:01.111697   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:01.111704   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:01.111714   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:01.176991   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:01.177017   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:01.188305   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:01.188339   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:01.254955   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:01.246696   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.247518   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249195   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249523   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.251117   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:01.246696   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.247518   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249195   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.249523   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:01.251117   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:01.254966   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:01.254977   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:01.336825   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:01.336852   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:03.866716   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:03.876694   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:03.876752   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:03.900150   46141 cri.go:89] found id: ""
	I1202 19:26:03.900164   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.900170   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:03.900176   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:03.900231   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:03.928045   46141 cri.go:89] found id: ""
	I1202 19:26:03.928059   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.928066   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:03.928071   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:03.928128   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:03.952359   46141 cri.go:89] found id: ""
	I1202 19:26:03.952372   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.952379   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:03.952384   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:03.952439   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:03.977113   46141 cri.go:89] found id: ""
	I1202 19:26:03.977127   46141 logs.go:282] 0 containers: []
	W1202 19:26:03.977134   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:03.977139   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:03.977195   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:04.001871   46141 cri.go:89] found id: ""
	I1202 19:26:04.001884   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.001890   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:04.001896   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:04.001950   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:04.029122   46141 cri.go:89] found id: ""
	I1202 19:26:04.029136   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.029143   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:04.029148   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:04.029206   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:04.059191   46141 cri.go:89] found id: ""
	I1202 19:26:04.059205   46141 logs.go:282] 0 containers: []
	W1202 19:26:04.059212   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:04.059219   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:04.059228   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:04.125149   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:04.125166   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:04.136144   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:04.136159   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:04.198077   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:04.190117   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.190591   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192194   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192641   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.194070   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:04.190117   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.190591   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192194   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.192641   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:04.194070   12769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:04.198088   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:04.198098   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:04.273217   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:04.273235   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:06.807224   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:06.817250   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:06.817318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:06.845880   46141 cri.go:89] found id: ""
	I1202 19:26:06.845895   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.845902   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:06.845908   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:06.845963   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:06.870846   46141 cri.go:89] found id: ""
	I1202 19:26:06.870859   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.870866   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:06.870871   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:06.870927   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:06.896774   46141 cri.go:89] found id: ""
	I1202 19:26:06.896788   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.896794   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:06.896800   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:06.896857   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:06.924394   46141 cri.go:89] found id: ""
	I1202 19:26:06.924407   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.924414   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:06.924419   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:06.924477   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:06.951775   46141 cri.go:89] found id: ""
	I1202 19:26:06.951789   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.951796   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:06.951804   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:06.951865   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:06.976656   46141 cri.go:89] found id: ""
	I1202 19:26:06.976674   46141 logs.go:282] 0 containers: []
	W1202 19:26:06.976682   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:06.976687   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:06.976743   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:07.002712   46141 cri.go:89] found id: ""
	I1202 19:26:07.002726   46141 logs.go:282] 0 containers: []
	W1202 19:26:07.002741   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:07.002753   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:07.002764   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:07.071978   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:07.063868   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.064510   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066274   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066863   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.068501   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:07.063868   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.064510   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066274   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.066863   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:07.068501   12864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:07.071988   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:07.072001   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:07.148506   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:07.148525   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:07.177526   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:07.177542   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:07.244597   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:07.244614   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:09.755980   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:09.766062   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:09.766136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:09.791272   46141 cri.go:89] found id: ""
	I1202 19:26:09.791285   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.791292   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:09.791297   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:09.791352   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:09.819809   46141 cri.go:89] found id: ""
	I1202 19:26:09.819822   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.819829   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:09.819834   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:09.819890   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:09.845138   46141 cri.go:89] found id: ""
	I1202 19:26:09.845151   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.845158   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:09.845163   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:09.845233   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:09.869181   46141 cri.go:89] found id: ""
	I1202 19:26:09.869194   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.869201   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:09.869215   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:09.869269   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:09.894166   46141 cri.go:89] found id: ""
	I1202 19:26:09.894180   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.894187   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:09.894192   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:09.894246   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:09.918581   46141 cri.go:89] found id: ""
	I1202 19:26:09.918594   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.918601   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:09.918606   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:09.918670   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:09.943199   46141 cri.go:89] found id: ""
	I1202 19:26:09.943213   46141 logs.go:282] 0 containers: []
	W1202 19:26:09.943219   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:09.943227   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:09.943238   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:10.008528   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:10.008545   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:10.019265   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:10.019283   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:10.097788   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:10.089795   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.090510   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092137   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092603   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.094152   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:10.089795   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.090510   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092137   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.092603   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:10.094152   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:10.097798   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:10.097814   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:10.175343   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:10.175361   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
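
Each retry cycle above runs the same diagnostic sweep: minikube probes for a kube-apiserver process, asks the CRI runtime for each control-plane container by name, and then gathers kubelet, dmesg, CRI-O, and "describe nodes" output. A minimal sketch of the equivalent manual commands on the node, assuming SSH access (the crictl names, journal units, kubectl path, and kubeconfig location are taken verbatim from the log lines above):

    # Is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Ask the CRI runtime for control-plane containers by name, in any state
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    # Recent kubelet and CRI-O journal entries
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # Node description via the bundled kubectl, using the in-VM kubeconfig
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
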
	I1202 19:26:12.705105   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:12.714930   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:12.714992   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:12.738794   46141 cri.go:89] found id: ""
	I1202 19:26:12.738808   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.738814   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:12.738819   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:12.738893   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:12.763061   46141 cri.go:89] found id: ""
	I1202 19:26:12.763074   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.763088   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:12.763094   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:12.763147   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:12.789884   46141 cri.go:89] found id: ""
	I1202 19:26:12.789897   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.789904   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:12.789909   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:12.789967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:12.815897   46141 cri.go:89] found id: ""
	I1202 19:26:12.815911   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.815918   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:12.815923   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:12.815980   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:12.842434   46141 cri.go:89] found id: ""
	I1202 19:26:12.842448   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.842455   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:12.842461   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:12.842521   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:12.867046   46141 cri.go:89] found id: ""
	I1202 19:26:12.867059   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.867066   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:12.867071   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:12.867136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:12.891464   46141 cri.go:89] found id: ""
	I1202 19:26:12.891478   46141 logs.go:282] 0 containers: []
	W1202 19:26:12.891484   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:12.891492   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:12.891503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:12.902121   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:12.902136   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:12.963892   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:12.955981   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.956766   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958385   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958708   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.960199   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:12.955981   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.956766   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958385   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.958708   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:12.960199   13076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:12.963902   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:12.963913   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:13.043923   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:13.043944   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:13.073893   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:13.073909   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:15.646846   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:15.656672   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:15.656727   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:15.685223   46141 cri.go:89] found id: ""
	I1202 19:26:15.685236   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.685243   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:15.685249   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:15.685309   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:15.710499   46141 cri.go:89] found id: ""
	I1202 19:26:15.710513   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.710520   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:15.710526   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:15.710582   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:15.734748   46141 cri.go:89] found id: ""
	I1202 19:26:15.734762   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.734775   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:15.734780   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:15.734833   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:15.759539   46141 cri.go:89] found id: ""
	I1202 19:26:15.759551   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.759558   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:15.759564   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:15.759617   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:15.788358   46141 cri.go:89] found id: ""
	I1202 19:26:15.788371   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.788378   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:15.788383   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:15.788443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:15.813365   46141 cri.go:89] found id: ""
	I1202 19:26:15.813379   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.813386   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:15.813391   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:15.813445   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:15.842535   46141 cri.go:89] found id: ""
	I1202 19:26:15.842550   46141 logs.go:282] 0 containers: []
	W1202 19:26:15.842558   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:15.842565   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:15.842576   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:15.853891   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:15.853906   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:15.921614   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:15.914053   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.914564   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916003   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916376   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.917614   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:15.914053   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.914564   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916003   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.916376   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:15.917614   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:15.921632   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:15.921643   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:15.997309   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:15.997326   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:16.029023   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:16.029039   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:18.596080   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:18.605748   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:18.605804   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:18.630525   46141 cri.go:89] found id: ""
	I1202 19:26:18.630539   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.630546   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:18.630551   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:18.630608   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:18.655399   46141 cri.go:89] found id: ""
	I1202 19:26:18.655412   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.655419   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:18.655425   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:18.655479   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:18.681041   46141 cri.go:89] found id: ""
	I1202 19:26:18.681054   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.681061   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:18.681067   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:18.681123   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:18.710155   46141 cri.go:89] found id: ""
	I1202 19:26:18.710168   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.710181   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:18.710187   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:18.710241   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:18.735242   46141 cri.go:89] found id: ""
	I1202 19:26:18.735256   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.735263   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:18.735268   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:18.735327   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:18.761061   46141 cri.go:89] found id: ""
	I1202 19:26:18.761074   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.761081   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:18.761087   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:18.761149   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:18.788428   46141 cri.go:89] found id: ""
	I1202 19:26:18.788441   46141 logs.go:282] 0 containers: []
	W1202 19:26:18.788448   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:18.788456   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:18.788475   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:18.822471   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:18.822487   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:18.888827   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:18.888844   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:18.899937   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:18.899952   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:18.968344   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:18.961155   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.961520   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963096   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963416   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.964883   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:18.961155   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.961520   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963096   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.963416   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:18.964883   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:18.968353   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:18.968365   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:21.544554   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:21.555728   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:21.555784   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:21.584623   46141 cri.go:89] found id: ""
	I1202 19:26:21.584639   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.584646   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:21.584650   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:21.584710   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:21.614647   46141 cri.go:89] found id: ""
	I1202 19:26:21.614660   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.614668   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:21.614672   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:21.614731   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:21.642925   46141 cri.go:89] found id: ""
	I1202 19:26:21.642938   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.642945   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:21.642950   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:21.643003   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:21.668180   46141 cri.go:89] found id: ""
	I1202 19:26:21.668194   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.668202   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:21.668207   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:21.668263   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:21.693295   46141 cri.go:89] found id: ""
	I1202 19:26:21.693308   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.693315   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:21.693321   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:21.693375   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:21.720442   46141 cri.go:89] found id: ""
	I1202 19:26:21.720456   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.720463   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:21.720477   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:21.720550   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:21.745858   46141 cri.go:89] found id: ""
	I1202 19:26:21.745872   46141 logs.go:282] 0 containers: []
	W1202 19:26:21.745879   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:21.745887   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:21.745898   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:21.821815   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:21.821832   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:21.852228   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:21.852243   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:21.925590   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:21.925615   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:21.936630   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:21.936646   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:22.000893   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:21.992158   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.992882   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.994656   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.995179   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.996825   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:21.992158   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.992882   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.994656   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.995179   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:21.996825   13408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
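
Every "describe nodes" attempt in this window fails identically: kubectl cannot reach the API server at localhost:8441, so no control-plane container ever came up. A quick hypothetical manual check from inside the node (not something the test harness itself runs) to confirm whether anything is listening on that port:

    # Anything bound to the apiserver port the kubeconfig points at?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Probe the healthz endpoint directly; "connection refused" means no listener
    curl -k --max-time 5 https://localhost:8441/healthz || true
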
	I1202 19:26:24.501139   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:24.511236   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:24.511298   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:24.536070   46141 cri.go:89] found id: ""
	I1202 19:26:24.536084   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.536091   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:24.536096   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:24.536152   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:24.570105   46141 cri.go:89] found id: ""
	I1202 19:26:24.570118   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.570125   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:24.570131   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:24.570195   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:24.602200   46141 cri.go:89] found id: ""
	I1202 19:26:24.602213   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.602220   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:24.602225   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:24.602286   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:24.627716   46141 cri.go:89] found id: ""
	I1202 19:26:24.627730   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.627737   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:24.627743   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:24.627799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:24.653555   46141 cri.go:89] found id: ""
	I1202 19:26:24.653568   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.653575   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:24.653580   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:24.653638   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:24.681296   46141 cri.go:89] found id: ""
	I1202 19:26:24.681310   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.681316   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:24.681322   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:24.681376   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:24.707692   46141 cri.go:89] found id: ""
	I1202 19:26:24.707705   46141 logs.go:282] 0 containers: []
	W1202 19:26:24.707714   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:24.707721   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:24.707731   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:24.782015   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:24.782033   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:24.809710   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:24.809725   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:24.880042   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:24.880061   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:24.890565   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:24.890580   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:24.952416   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:24.944479   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.945161   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.946873   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.947505   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.949103   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:24.944479   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.945161   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.946873   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.947505   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:24.949103   13513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:27.452632   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:27.462873   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:27.462933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:27.487753   46141 cri.go:89] found id: ""
	I1202 19:26:27.487766   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.487773   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:27.487778   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:27.487835   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:27.512748   46141 cri.go:89] found id: ""
	I1202 19:26:27.512762   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.512771   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:27.512776   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:27.512833   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:27.542024   46141 cri.go:89] found id: ""
	I1202 19:26:27.542038   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.542045   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:27.542051   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:27.542109   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:27.579960   46141 cri.go:89] found id: ""
	I1202 19:26:27.579973   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.579979   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:27.579989   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:27.580045   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:27.608229   46141 cri.go:89] found id: ""
	I1202 19:26:27.608242   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.608250   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:27.608255   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:27.608318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:27.634613   46141 cri.go:89] found id: ""
	I1202 19:26:27.634626   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.634633   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:27.634639   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:27.634695   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:27.659548   46141 cri.go:89] found id: ""
	I1202 19:26:27.659562   46141 logs.go:282] 0 containers: []
	W1202 19:26:27.659569   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:27.659576   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:27.659587   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:27.727694   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:27.720173   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.720588   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722165   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722762   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.724256   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:27.720173   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.720588   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722165   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.722762   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:27.724256   13599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:27.727704   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:27.727715   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:27.802309   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:27.802327   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:27.831471   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:27.831486   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:27.899227   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:27.899244   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:30.413752   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:30.423684   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:30.423741   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:30.447673   46141 cri.go:89] found id: ""
	I1202 19:26:30.447688   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.447695   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:30.447706   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:30.447762   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:30.473178   46141 cri.go:89] found id: ""
	I1202 19:26:30.473191   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.473198   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:30.473203   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:30.473258   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:30.499098   46141 cri.go:89] found id: ""
	I1202 19:26:30.499112   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.499119   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:30.499124   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:30.499181   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:30.528083   46141 cri.go:89] found id: ""
	I1202 19:26:30.528096   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.528103   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:30.528108   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:30.528165   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:30.562772   46141 cri.go:89] found id: ""
	I1202 19:26:30.562784   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.562791   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:30.562796   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:30.562852   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:30.592139   46141 cri.go:89] found id: ""
	I1202 19:26:30.592152   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.592158   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:30.592163   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:30.592217   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:30.624862   46141 cri.go:89] found id: ""
	I1202 19:26:30.624875   46141 logs.go:282] 0 containers: []
	W1202 19:26:30.624882   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:30.624889   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:30.624901   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:30.636356   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:30.636374   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:30.698721   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:30.690521   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.691312   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.692970   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.693279   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.694784   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:30.690521   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.691312   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.692970   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.693279   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:30.694784   13708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:30.698731   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:30.698745   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:30.775221   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:30.775240   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:30.812702   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:30.812718   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:33.383460   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:33.393252   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:33.393318   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:33.417381   46141 cri.go:89] found id: ""
	I1202 19:26:33.417394   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.417401   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:33.417407   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:33.417467   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:33.441554   46141 cri.go:89] found id: ""
	I1202 19:26:33.441567   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.441574   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:33.441580   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:33.441633   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:33.466601   46141 cri.go:89] found id: ""
	I1202 19:26:33.466615   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.466621   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:33.466627   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:33.466680   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:33.494897   46141 cri.go:89] found id: ""
	I1202 19:26:33.494910   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.494917   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:33.494922   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:33.494978   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:33.519464   46141 cri.go:89] found id: ""
	I1202 19:26:33.519478   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.519485   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:33.519490   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:33.519549   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:33.556189   46141 cri.go:89] found id: ""
	I1202 19:26:33.556203   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.556210   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:33.556216   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:33.556276   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:33.592420   46141 cri.go:89] found id: ""
	I1202 19:26:33.592436   46141 logs.go:282] 0 containers: []
	W1202 19:26:33.592442   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:33.592459   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:33.592469   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:33.669109   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:33.669128   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:33.703954   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:33.703970   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:33.773221   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:33.773240   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:33.784054   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:33.784068   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:33.846758   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:33.838322   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.839078   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.840804   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.841128   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.842739   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:33.838322   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.839078   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.840804   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.841128   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:33.842739   13829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:36.347013   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:36.357404   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:36.357461   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:36.383307   46141 cri.go:89] found id: ""
	I1202 19:26:36.383322   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.383330   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:36.383336   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:36.383391   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:36.409566   46141 cri.go:89] found id: ""
	I1202 19:26:36.409580   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.409588   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:36.409593   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:36.409682   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:36.435280   46141 cri.go:89] found id: ""
	I1202 19:26:36.435294   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.435300   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:36.435306   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:36.435366   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:36.460290   46141 cri.go:89] found id: ""
	I1202 19:26:36.460304   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.460310   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:36.460316   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:36.460368   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:36.484719   46141 cri.go:89] found id: ""
	I1202 19:26:36.484733   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.484740   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:36.484746   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:36.484800   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:36.510020   46141 cri.go:89] found id: ""
	I1202 19:26:36.510034   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.510042   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:36.510048   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:36.510106   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:36.536500   46141 cri.go:89] found id: ""
	I1202 19:26:36.536515   46141 logs.go:282] 0 containers: []
	W1202 19:26:36.536521   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:36.536529   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:36.536539   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:36.616617   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:36.616636   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:36.647169   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:36.647185   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:36.711768   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:36.711787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:36.723184   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:36.723200   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:36.795174   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:36.786043   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.786834   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.788445   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.789117   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.791007   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:36.786043   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.786834   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.788445   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.789117   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:36.791007   13935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:39.296074   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:39.306024   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:39.306085   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:39.335889   46141 cri.go:89] found id: ""
	I1202 19:26:39.335915   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.335923   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:39.335928   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:39.335990   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:39.361424   46141 cri.go:89] found id: ""
	I1202 19:26:39.361438   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.361445   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:39.361450   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:39.361505   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:39.387900   46141 cri.go:89] found id: ""
	I1202 19:26:39.387913   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.387920   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:39.387925   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:39.387988   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:39.413856   46141 cri.go:89] found id: ""
	I1202 19:26:39.413871   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.413878   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:39.413884   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:39.413938   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:39.439194   46141 cri.go:89] found id: ""
	I1202 19:26:39.439208   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.439215   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:39.439221   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:39.439278   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:39.465337   46141 cri.go:89] found id: ""
	I1202 19:26:39.465351   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.465359   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:39.465375   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:39.465442   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:39.493124   46141 cri.go:89] found id: ""
	I1202 19:26:39.493137   46141 logs.go:282] 0 containers: []
	W1202 19:26:39.493144   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:39.493152   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:39.493162   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:39.573759   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:39.573780   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:39.608655   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:39.608671   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:39.681483   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:39.681503   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:39.692678   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:39.692693   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:39.753005   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:39.745469   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.746166   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747307   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747932   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.749551   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:39.745469   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.746166   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747307   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.747932   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:39.749551   14040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
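	The cycle above repeats once per control-plane component before falling back to log gathering, and every describe-nodes attempt dials the apiserver on localhost:8441. A minimal triage sketch, assuming `minikube ssh` reaches the same node (the component names, the `crictl` flags, and port 8441 are all taken from the log above, not from any other source):

	# Sketch only: replay the per-component CRI checks shown in the cycles above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  minikube ssh -- sudo crictl ps -a --quiet --name="$c"   # empty output corresponds to: found id: ""
	done
	# The describe-nodes failures dial localhost:8441; check whether anything is listening there.
	minikube ssh -- "sudo ss -ltnp | grep ':8441' || echo 'nothing listening on :8441'"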
	I1202 19:26:42.253264   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:42.266584   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:42.266662   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:42.301576   46141 cri.go:89] found id: ""
	I1202 19:26:42.301591   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.301599   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:42.301605   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:42.301727   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:42.360247   46141 cri.go:89] found id: ""
	I1202 19:26:42.360262   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.360269   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:42.360275   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:42.360344   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:42.390741   46141 cri.go:89] found id: ""
	I1202 19:26:42.390756   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.390766   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:42.390776   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:42.390853   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:42.419121   46141 cri.go:89] found id: ""
	I1202 19:26:42.419137   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.419144   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:42.419152   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:42.419225   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:42.446778   46141 cri.go:89] found id: ""
	I1202 19:26:42.446792   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.446811   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:42.446816   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:42.446884   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:42.472520   46141 cri.go:89] found id: ""
	I1202 19:26:42.472534   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.472541   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:42.472546   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:42.472603   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:42.498770   46141 cri.go:89] found id: ""
	I1202 19:26:42.498783   46141 logs.go:282] 0 containers: []
	W1202 19:26:42.498789   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:42.498797   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:42.498806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:42.579006   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:42.579025   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:42.609942   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:42.609958   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:42.683995   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:42.684022   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:42.695018   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:42.695038   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:42.757205   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:42.749273   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.750084   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751649   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751951   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.753404   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:42.749273   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.750084   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751649   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.751951   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:42.753404   14145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:45.257372   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:45.279258   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:45.279391   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:45.324360   46141 cri.go:89] found id: ""
	I1202 19:26:45.324374   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.324382   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:45.324389   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:45.324461   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:45.357406   46141 cri.go:89] found id: ""
	I1202 19:26:45.357438   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.357445   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:45.357451   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:45.357520   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:45.390814   46141 cri.go:89] found id: ""
	I1202 19:26:45.390829   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.390836   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:45.390842   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:45.390910   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:45.422248   46141 cri.go:89] found id: ""
	I1202 19:26:45.422262   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.422269   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:45.422274   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:45.422331   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:45.447593   46141 cri.go:89] found id: ""
	I1202 19:26:45.447607   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.447614   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:45.447618   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:45.447669   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:45.473750   46141 cri.go:89] found id: ""
	I1202 19:26:45.473763   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.473770   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:45.473775   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:45.473838   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:45.502345   46141 cri.go:89] found id: ""
	I1202 19:26:45.502358   46141 logs.go:282] 0 containers: []
	W1202 19:26:45.502364   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:45.502373   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:45.502383   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:45.569300   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:45.569319   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:45.581070   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:45.581086   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:45.647631   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:45.640250   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.640775   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642295   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642715   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.644225   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:45.640250   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.640775   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642295   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.642715   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:45.644225   14236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:45.647641   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:45.647652   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:45.722681   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:45.722699   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:48.249966   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:48.259729   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:48.259788   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:48.284968   46141 cri.go:89] found id: ""
	I1202 19:26:48.284981   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.284995   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:48.285001   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:48.285058   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:48.312117   46141 cri.go:89] found id: ""
	I1202 19:26:48.312131   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.312138   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:48.312143   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:48.312196   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:48.338030   46141 cri.go:89] found id: ""
	I1202 19:26:48.338044   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.338050   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:48.338055   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:48.338108   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:48.363655   46141 cri.go:89] found id: ""
	I1202 19:26:48.363668   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.363675   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:48.363680   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:48.363732   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:48.388544   46141 cri.go:89] found id: ""
	I1202 19:26:48.388565   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.388572   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:48.388577   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:48.388631   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:48.413919   46141 cri.go:89] found id: ""
	I1202 19:26:48.413932   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.413939   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:48.413962   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:48.414018   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:48.438768   46141 cri.go:89] found id: ""
	I1202 19:26:48.438782   46141 logs.go:282] 0 containers: []
	W1202 19:26:48.438789   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:48.438796   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:48.438806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:48.508480   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:48.508498   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:48.519336   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:48.519354   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:48.612485   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:48.603737   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.604095   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605597   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605931   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.607339   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:48.603737   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.604095   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605597   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.605931   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:48.607339   14337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:48.612495   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:48.612505   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:48.689541   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:48.689559   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:51.220741   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:51.230995   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:51.231052   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:51.257767   46141 cri.go:89] found id: ""
	I1202 19:26:51.257786   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.257794   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:51.257801   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:51.257856   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:51.282338   46141 cri.go:89] found id: ""
	I1202 19:26:51.282351   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.282358   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:51.282363   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:51.282425   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:51.311031   46141 cri.go:89] found id: ""
	I1202 19:26:51.311044   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.311051   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:51.311056   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:51.311111   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:51.339385   46141 cri.go:89] found id: ""
	I1202 19:26:51.339399   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.339405   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:51.339410   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:51.339476   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:51.368365   46141 cri.go:89] found id: ""
	I1202 19:26:51.368379   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.368386   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:51.368391   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:51.368455   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:51.393598   46141 cri.go:89] found id: ""
	I1202 19:26:51.393611   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.393618   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:51.393623   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:51.393696   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:51.423516   46141 cri.go:89] found id: ""
	I1202 19:26:51.423529   46141 logs.go:282] 0 containers: []
	W1202 19:26:51.423536   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:51.423543   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:51.423553   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:51.488010   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:51.480076   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.480890   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482553   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482887   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.484436   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:51.480076   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.480890   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482553   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.482887   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:51.484436   14437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:51.488020   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:51.488031   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:51.568503   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:51.568521   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:51.604611   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:51.604626   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:51.673166   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:51.673184   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:54.184676   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:54.194875   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:54.194933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:54.219830   46141 cri.go:89] found id: ""
	I1202 19:26:54.219850   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.219857   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:54.219863   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:54.219922   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:54.245201   46141 cri.go:89] found id: ""
	I1202 19:26:54.245214   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.245221   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:54.245228   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:54.245295   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:54.270718   46141 cri.go:89] found id: ""
	I1202 19:26:54.270732   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.270739   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:54.270744   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:54.270799   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:54.295488   46141 cri.go:89] found id: ""
	I1202 19:26:54.295501   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.295508   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:54.295513   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:54.295568   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:54.320597   46141 cri.go:89] found id: ""
	I1202 19:26:54.320610   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.320617   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:54.320622   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:54.320675   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:54.348002   46141 cri.go:89] found id: ""
	I1202 19:26:54.348017   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.348024   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:54.348029   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:54.348089   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:54.374189   46141 cri.go:89] found id: ""
	I1202 19:26:54.374203   46141 logs.go:282] 0 containers: []
	W1202 19:26:54.374209   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:54.374217   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:54.374229   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:54.439569   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:54.429536   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.430781   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.433917   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.434361   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.435887   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:54.429536   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.430781   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.433917   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.434361   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:54.435887   14543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:54.439581   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:54.439594   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:54.524214   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:54.524233   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:54.564820   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:54.564841   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:26:54.639908   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:54.639928   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:57.151760   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:26:57.161952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:26:57.162007   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:26:57.186061   46141 cri.go:89] found id: ""
	I1202 19:26:57.186074   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.186081   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:26:57.186087   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:26:57.186144   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:26:57.211829   46141 cri.go:89] found id: ""
	I1202 19:26:57.211843   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.211850   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:26:57.211856   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:26:57.211914   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:26:57.237584   46141 cri.go:89] found id: ""
	I1202 19:26:57.237598   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.237605   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:26:57.237610   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:26:57.237697   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:26:57.266726   46141 cri.go:89] found id: ""
	I1202 19:26:57.266740   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.266746   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:26:57.266752   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:26:57.266810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:26:57.293971   46141 cri.go:89] found id: ""
	I1202 19:26:57.293984   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.293991   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:26:57.293996   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:26:57.294050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:26:57.322602   46141 cri.go:89] found id: ""
	I1202 19:26:57.322615   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.322622   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:26:57.322628   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:26:57.322685   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:26:57.347221   46141 cri.go:89] found id: ""
	I1202 19:26:57.347234   46141 logs.go:282] 0 containers: []
	W1202 19:26:57.347249   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:26:57.347257   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:26:57.347267   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:26:57.358475   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:26:57.358490   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:26:57.420357   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:26:57.412236   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.412944   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.414687   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.415347   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.416972   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:26:57.412236   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.412944   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.414687   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.415347   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:26:57.416972   14650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:26:57.420367   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:26:57.420378   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:26:57.498037   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:26:57.498057   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:26:57.530853   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:26:57.530870   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
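	Each retry cycle opens with the same process probe and then gathers the CRI-O and kubelet journals. As a quick manual check, those same commands (they appear verbatim in the cycles above) can be run directly; a sketch, again assuming `minikube ssh` reaches the node under test:

	# Sketch only: the probe each cycle starts with, plus the journals it falls back to.
	minikube ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'"
	minikube ssh -- "sudo journalctl -u crio -n 400 --no-pager | tail -n 40"
	minikube ssh -- "sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40"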
	I1202 19:27:00.105404   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:00.167692   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:00.167773   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:00.310630   46141 cri.go:89] found id: ""
	I1202 19:27:00.310644   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.310652   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:00.310659   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:00.310726   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:00.379652   46141 cri.go:89] found id: ""
	I1202 19:27:00.379665   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.379673   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:00.379678   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:00.379740   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:00.417470   46141 cri.go:89] found id: ""
	I1202 19:27:00.417487   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.417496   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:00.417501   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:00.417571   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:00.459129   46141 cri.go:89] found id: ""
	I1202 19:27:00.459144   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.459151   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:00.459157   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:00.459225   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:00.491958   46141 cri.go:89] found id: ""
	I1202 19:27:00.491973   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.491980   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:00.491986   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:00.492050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:00.522076   46141 cri.go:89] found id: ""
	I1202 19:27:00.522091   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.522098   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:00.522110   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:00.522185   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:00.560640   46141 cri.go:89] found id: ""
	I1202 19:27:00.560654   46141 logs.go:282] 0 containers: []
	W1202 19:27:00.560661   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:00.560668   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:00.560677   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:00.652444   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:00.652464   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:00.684426   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:00.684441   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:00.751419   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:00.751437   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:00.763771   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:00.763786   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:00.826022   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:00.817872   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.818580   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820137   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820449   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.822046   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:00.817872   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.818580   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820137   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.820449   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:00.822046   14777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:03.326866   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:03.336590   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:03.336644   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:03.361031   46141 cri.go:89] found id: ""
	I1202 19:27:03.361045   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.361051   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:03.361057   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:03.361109   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:03.385187   46141 cri.go:89] found id: ""
	I1202 19:27:03.385201   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.385208   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:03.385214   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:03.385268   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:03.410330   46141 cri.go:89] found id: ""
	I1202 19:27:03.410343   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.410350   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:03.410355   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:03.410412   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:03.435485   46141 cri.go:89] found id: ""
	I1202 19:27:03.435499   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.435505   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:03.435511   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:03.435565   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:03.460310   46141 cri.go:89] found id: ""
	I1202 19:27:03.460323   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.460330   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:03.460335   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:03.460389   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:03.488041   46141 cri.go:89] found id: ""
	I1202 19:27:03.488054   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.488061   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:03.488066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:03.488120   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:03.512748   46141 cri.go:89] found id: ""
	I1202 19:27:03.512761   46141 logs.go:282] 0 containers: []
	W1202 19:27:03.512768   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:03.512776   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:03.512787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:03.523642   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:03.523658   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:03.617573   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:03.607856   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.608861   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610107   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610726   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.612297   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:03.607856   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.608861   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610107   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.610726   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:03.612297   14861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:03.617591   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:03.617602   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:03.694365   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:03.694383   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:03.726522   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:03.726537   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:06.302579   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:06.312543   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:06.312604   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:06.337638   46141 cri.go:89] found id: ""
	I1202 19:27:06.337693   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.337700   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:06.337706   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:06.337764   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:06.362621   46141 cri.go:89] found id: ""
	I1202 19:27:06.362634   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.362641   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:06.362646   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:06.362698   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:06.387105   46141 cri.go:89] found id: ""
	I1202 19:27:06.387121   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.387127   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:06.387133   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:06.387186   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:06.415681   46141 cri.go:89] found id: ""
	I1202 19:27:06.415694   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.415700   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:06.415706   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:06.415760   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:06.444254   46141 cri.go:89] found id: ""
	I1202 19:27:06.444267   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.444274   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:06.444279   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:06.444337   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:06.468778   46141 cri.go:89] found id: ""
	I1202 19:27:06.468791   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.468799   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:06.468805   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:06.468859   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:06.493545   46141 cri.go:89] found id: ""
	I1202 19:27:06.493558   46141 logs.go:282] 0 containers: []
	W1202 19:27:06.493564   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:06.493572   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:06.493583   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:06.567943   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:06.559330   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.560393   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.561970   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.562264   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.563594   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:06.559330   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.560393   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.561970   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.562264   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:06.563594   14961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:06.567953   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:06.567963   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:06.656325   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:06.656344   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:06.685907   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:06.685923   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:06.756875   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:06.756894   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:09.270257   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:09.280597   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:09.280658   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:09.304838   46141 cri.go:89] found id: ""
	I1202 19:27:09.304856   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.304863   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:09.304872   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:09.304926   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:09.329409   46141 cri.go:89] found id: ""
	I1202 19:27:09.329422   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.329430   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:09.329435   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:09.329491   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:09.353934   46141 cri.go:89] found id: ""
	I1202 19:27:09.353948   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.353954   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:09.353960   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:09.354016   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:09.379084   46141 cri.go:89] found id: ""
	I1202 19:27:09.379098   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.379105   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:09.379111   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:09.379166   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:09.404377   46141 cri.go:89] found id: ""
	I1202 19:27:09.404391   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.404398   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:09.404403   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:09.404459   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:09.429248   46141 cri.go:89] found id: ""
	I1202 19:27:09.429262   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.429269   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:09.429274   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:09.429331   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:09.453340   46141 cri.go:89] found id: ""
	I1202 19:27:09.453354   46141 logs.go:282] 0 containers: []
	W1202 19:27:09.453360   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:09.453367   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:09.453378   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:09.519114   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:09.519131   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:09.530268   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:09.530282   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:09.622354   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:09.612897   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.613708   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615186   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615757   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.617773   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:09.612897   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.613708   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615186   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.615757   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:09.617773   15073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:09.622364   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:09.622374   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:09.698919   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:09.698936   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:12.231072   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:12.240732   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:12.240796   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:12.267547   46141 cri.go:89] found id: ""
	I1202 19:27:12.267560   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.267566   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:12.267572   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:12.267626   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:12.291129   46141 cri.go:89] found id: ""
	I1202 19:27:12.291143   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.291150   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:12.291155   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:12.291209   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:12.316228   46141 cri.go:89] found id: ""
	I1202 19:27:12.316242   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.316248   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:12.316253   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:12.316305   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:12.340306   46141 cri.go:89] found id: ""
	I1202 19:27:12.340319   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.340326   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:12.340331   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:12.340386   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:12.365210   46141 cri.go:89] found id: ""
	I1202 19:27:12.365224   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.365230   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:12.365239   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:12.365299   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:12.393299   46141 cri.go:89] found id: ""
	I1202 19:27:12.393312   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.393319   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:12.393327   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:12.393387   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:12.418063   46141 cri.go:89] found id: ""
	I1202 19:27:12.418089   46141 logs.go:282] 0 containers: []
	W1202 19:27:12.418096   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:12.418104   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:12.418114   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:12.450419   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:12.450434   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:12.520281   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:12.520300   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:12.531244   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:12.531260   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:12.614672   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:12.606974   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.607492   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609182   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609779   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.611155   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:12.606974   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.607492   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609182   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.609779   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:12.611155   15191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:12.614681   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:12.614691   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:15.191935   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:15.202075   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:15.202136   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:15.227991   46141 cri.go:89] found id: ""
	I1202 19:27:15.228004   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.228011   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:15.228016   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:15.228073   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:15.253837   46141 cri.go:89] found id: ""
	I1202 19:27:15.253850   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.253856   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:15.253861   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:15.253916   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:15.279658   46141 cri.go:89] found id: ""
	I1202 19:27:15.279671   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.279677   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:15.279682   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:15.279735   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:15.303415   46141 cri.go:89] found id: ""
	I1202 19:27:15.303429   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.303435   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:15.303440   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:15.303496   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:15.327738   46141 cri.go:89] found id: ""
	I1202 19:27:15.327752   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.327759   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:15.327764   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:15.327818   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:15.353097   46141 cri.go:89] found id: ""
	I1202 19:27:15.353110   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.353117   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:15.353122   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:15.353175   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:15.377713   46141 cri.go:89] found id: ""
	I1202 19:27:15.377726   46141 logs.go:282] 0 containers: []
	W1202 19:27:15.377734   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:15.377741   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:15.377751   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:15.443006   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:15.443024   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:15.453500   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:15.453519   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:15.518415   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:15.510822   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.511600   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513071   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513530   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.514994   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:15.510822   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.511600   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513071   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.513530   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:15.514994   15286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:15.518425   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:15.518438   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:15.596810   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:15.596828   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:18.130179   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:18.140204   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:18.140265   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:18.167800   46141 cri.go:89] found id: ""
	I1202 19:27:18.167814   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.167821   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:18.167826   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:18.167882   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:18.191990   46141 cri.go:89] found id: ""
	I1202 19:27:18.192003   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.192010   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:18.192015   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:18.192072   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:18.216815   46141 cri.go:89] found id: ""
	I1202 19:27:18.216828   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.216835   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:18.216840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:18.216894   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:18.240868   46141 cri.go:89] found id: ""
	I1202 19:27:18.240881   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.240888   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:18.240894   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:18.240950   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:18.265457   46141 cri.go:89] found id: ""
	I1202 19:27:18.265470   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.265476   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:18.265482   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:18.265533   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:18.289248   46141 cri.go:89] found id: ""
	I1202 19:27:18.289262   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.289269   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:18.289275   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:18.289339   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:18.312672   46141 cri.go:89] found id: ""
	I1202 19:27:18.312685   46141 logs.go:282] 0 containers: []
	W1202 19:27:18.312692   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:18.312700   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:18.312710   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:18.380764   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:18.380781   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:18.391485   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:18.391501   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:18.453699   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:18.445756   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.446556   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448225   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448811   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.450496   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:18.445756   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.446556   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448225   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.448811   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:18.450496   15391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:18.453709   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:18.453720   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:18.530116   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:18.530134   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:21.069567   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:21.079484   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:21.079550   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:21.103488   46141 cri.go:89] found id: ""
	I1202 19:27:21.103503   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.103511   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:21.103517   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:21.103572   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:21.130794   46141 cri.go:89] found id: ""
	I1202 19:27:21.130807   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.130814   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:21.130819   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:21.130876   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:21.154925   46141 cri.go:89] found id: ""
	I1202 19:27:21.154940   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.154946   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:21.154952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:21.155008   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:21.183874   46141 cri.go:89] found id: ""
	I1202 19:27:21.183887   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.183895   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:21.183900   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:21.183956   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:21.208723   46141 cri.go:89] found id: ""
	I1202 19:27:21.208736   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.208744   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:21.208750   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:21.208805   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:21.233965   46141 cri.go:89] found id: ""
	I1202 19:27:21.233978   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.233985   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:21.233990   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:21.234046   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:21.257686   46141 cri.go:89] found id: ""
	I1202 19:27:21.257699   46141 logs.go:282] 0 containers: []
	W1202 19:27:21.257706   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:21.257714   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:21.257724   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:21.318236   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:21.310432   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.311107   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.312717   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.313282   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.314957   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:21.310432   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.311107   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.312717   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.313282   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:21.314957   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:21.318250   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:21.318261   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:21.395292   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:21.395310   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:21.422658   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:21.422674   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:21.489157   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:21.489174   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:24.001769   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:24.011691   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:24.011752   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:24.042533   46141 cri.go:89] found id: ""
	I1202 19:27:24.042554   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.042561   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:24.042566   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:24.042624   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:24.070666   46141 cri.go:89] found id: ""
	I1202 19:27:24.070679   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.070686   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:24.070691   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:24.070753   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:24.095535   46141 cri.go:89] found id: ""
	I1202 19:27:24.095549   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.095556   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:24.095561   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:24.095619   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:24.123758   46141 cri.go:89] found id: ""
	I1202 19:27:24.123772   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.123779   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:24.123784   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:24.123838   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:24.149095   46141 cri.go:89] found id: ""
	I1202 19:27:24.149108   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.149114   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:24.149120   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:24.149175   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:24.174002   46141 cri.go:89] found id: ""
	I1202 19:27:24.174015   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.174022   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:24.174027   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:24.174125   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:24.200105   46141 cri.go:89] found id: ""
	I1202 19:27:24.200119   46141 logs.go:282] 0 containers: []
	W1202 19:27:24.200126   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:24.200133   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:24.200144   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:24.266202   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:24.266219   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:24.277238   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:24.277253   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:24.343395   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:24.336040   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.336700   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338307   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338687   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.340133   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:24.336040   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.336700   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338307   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.338687   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:24.340133   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:24.343404   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:24.343414   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:24.424919   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:24.424936   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:26.953925   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:26.963713   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:26.963769   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:26.988142   46141 cri.go:89] found id: ""
	I1202 19:27:26.988156   46141 logs.go:282] 0 containers: []
	W1202 19:27:26.988163   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:26.988168   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:26.988223   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:27.013673   46141 cri.go:89] found id: ""
	I1202 19:27:27.013687   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.013694   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:27.013699   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:27.013754   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:27.039371   46141 cri.go:89] found id: ""
	I1202 19:27:27.039384   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.039391   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:27.039396   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:27.039452   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:27.062786   46141 cri.go:89] found id: ""
	I1202 19:27:27.062800   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.062807   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:27.062812   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:27.062868   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:27.087058   46141 cri.go:89] found id: ""
	I1202 19:27:27.087072   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.087078   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:27.087083   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:27.087139   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:27.111397   46141 cri.go:89] found id: ""
	I1202 19:27:27.111410   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.111417   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:27.111422   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:27.111474   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:27.134753   46141 cri.go:89] found id: ""
	I1202 19:27:27.134774   46141 logs.go:282] 0 containers: []
	W1202 19:27:27.134781   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:27.134788   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:27.134798   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:27.200051   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:27.200069   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:27.210589   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:27.210603   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:27.274673   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:27.267276   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.268043   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269599   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269964   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.271462   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:27.267276   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.268043   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269599   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.269964   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:27.271462   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:27.274684   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:27.274695   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:27.350589   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:27.350607   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:29.879009   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:29.888757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:29.888814   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:29.914106   46141 cri.go:89] found id: ""
	I1202 19:27:29.914119   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.914126   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:29.914131   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:29.914198   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:29.945870   46141 cri.go:89] found id: ""
	I1202 19:27:29.945883   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.945890   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:29.945895   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:29.945951   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:29.972147   46141 cri.go:89] found id: ""
	I1202 19:27:29.972161   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.972168   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:29.972173   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:29.972237   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:29.999569   46141 cri.go:89] found id: ""
	I1202 19:27:29.999583   46141 logs.go:282] 0 containers: []
	W1202 19:27:29.999590   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:29.999595   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:29.999654   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:30.048258   46141 cri.go:89] found id: ""
	I1202 19:27:30.048273   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.048281   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:30.048286   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:30.048361   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:30.083224   46141 cri.go:89] found id: ""
	I1202 19:27:30.083238   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.083245   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:30.083251   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:30.083308   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:30.113945   46141 cri.go:89] found id: ""
	I1202 19:27:30.113959   46141 logs.go:282] 0 containers: []
	W1202 19:27:30.113966   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:30.113975   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:30.113986   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:30.192106   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:30.192125   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:30.221887   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:30.221904   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:30.290188   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:30.290204   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:30.301167   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:30.301182   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:30.362881   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:30.354371   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.354912   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.356661   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.357283   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.358977   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:30.354371   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.354912   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.356661   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.357283   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:30.358977   15823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:32.863109   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:32.872876   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:32.872937   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:32.897586   46141 cri.go:89] found id: ""
	I1202 19:27:32.897603   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.897610   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:32.897615   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:32.897706   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:32.924245   46141 cri.go:89] found id: ""
	I1202 19:27:32.924258   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.924265   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:32.924270   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:32.924332   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:32.951911   46141 cri.go:89] found id: ""
	I1202 19:27:32.951925   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.951932   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:32.951938   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:32.951992   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:32.975852   46141 cri.go:89] found id: ""
	I1202 19:27:32.975865   46141 logs.go:282] 0 containers: []
	W1202 19:27:32.975872   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:32.975878   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:32.975933   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:33.000511   46141 cri.go:89] found id: ""
	I1202 19:27:33.000525   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.000532   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:33.000537   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:33.000591   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:33.030910   46141 cri.go:89] found id: ""
	I1202 19:27:33.030924   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.030931   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:33.030936   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:33.030993   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:33.055909   46141 cri.go:89] found id: ""
	I1202 19:27:33.055922   46141 logs.go:282] 0 containers: []
	W1202 19:27:33.055929   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:33.055937   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:33.055947   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:33.121449   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:33.121471   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:33.134922   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:33.134955   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:33.198500   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:33.189992   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.190784   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.192519   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.193191   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.194708   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:33.189992   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.190784   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.192519   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.193191   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:33.194708   15913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:33.198512   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:33.198524   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:33.275340   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:33.275358   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:35.803184   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:35.814556   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:35.814622   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:35.843911   46141 cri.go:89] found id: ""
	I1202 19:27:35.843927   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.843934   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:35.843939   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:35.844010   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:35.872792   46141 cri.go:89] found id: ""
	I1202 19:27:35.872807   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.872814   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:35.872819   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:35.872885   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:35.899563   46141 cri.go:89] found id: ""
	I1202 19:27:35.899576   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.899583   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:35.899588   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:35.899642   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:35.929110   46141 cri.go:89] found id: ""
	I1202 19:27:35.929133   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.929141   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:35.929147   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:35.929214   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:35.953603   46141 cri.go:89] found id: ""
	I1202 19:27:35.953617   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.953624   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:35.953629   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:35.953706   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:35.978487   46141 cri.go:89] found id: ""
	I1202 19:27:35.978501   46141 logs.go:282] 0 containers: []
	W1202 19:27:35.978508   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:35.978513   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:35.978571   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:36.002610   46141 cri.go:89] found id: ""
	I1202 19:27:36.002623   46141 logs.go:282] 0 containers: []
	W1202 19:27:36.002629   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:36.002636   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:36.002647   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:36.078660   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:36.078679   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:36.108572   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:36.108589   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:36.174842   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:36.174858   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:36.185725   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:36.185740   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:36.248843   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:36.241261   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.241837   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243308   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243762   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.245492   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:36.241261   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.241837   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243308   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.243762   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:36.245492   16029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:38.749933   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:38.759902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:38.759959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:38.784371   46141 cri.go:89] found id: ""
	I1202 19:27:38.784384   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.784390   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:38.784396   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:38.784449   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:38.813903   46141 cri.go:89] found id: ""
	I1202 19:27:38.813918   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.813925   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:38.813930   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:38.813986   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:38.847704   46141 cri.go:89] found id: ""
	I1202 19:27:38.847718   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.847724   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:38.847730   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:38.847786   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:38.874126   46141 cri.go:89] found id: ""
	I1202 19:27:38.874139   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.874146   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:38.874151   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:38.874204   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:38.899808   46141 cri.go:89] found id: ""
	I1202 19:27:38.899822   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.899829   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:38.899835   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:38.899890   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:38.924777   46141 cri.go:89] found id: ""
	I1202 19:27:38.924791   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.924798   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:38.924804   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:38.924898   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:38.949761   46141 cri.go:89] found id: ""
	I1202 19:27:38.949774   46141 logs.go:282] 0 containers: []
	W1202 19:27:38.949781   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:38.949788   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:38.949802   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:39.008770   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:39.001782   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.002283   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003482   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003943   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.005427   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:39.001782   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.002283   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003482   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.003943   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:39.005427   16117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:39.008780   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:39.008794   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:39.090107   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:39.090125   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:39.122398   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:39.122414   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:39.187817   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:39.187833   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:41.698611   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:41.708767   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:41.708837   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:41.733990   46141 cri.go:89] found id: ""
	I1202 19:27:41.734004   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.734011   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:41.734017   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:41.734080   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:41.759279   46141 cri.go:89] found id: ""
	I1202 19:27:41.759293   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.759299   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:41.759305   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:41.759359   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:41.793259   46141 cri.go:89] found id: ""
	I1202 19:27:41.793272   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.793278   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:41.793284   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:41.793339   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:41.821458   46141 cri.go:89] found id: ""
	I1202 19:27:41.821471   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.821484   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:41.821489   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:41.821545   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:41.849637   46141 cri.go:89] found id: ""
	I1202 19:27:41.849670   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.849678   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:41.849683   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:41.849743   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:41.881100   46141 cri.go:89] found id: ""
	I1202 19:27:41.881113   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.881121   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:41.881127   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:41.881189   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:41.906054   46141 cri.go:89] found id: ""
	I1202 19:27:41.906067   46141 logs.go:282] 0 containers: []
	W1202 19:27:41.906074   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:41.906082   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:41.906092   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:41.916746   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:41.916761   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:41.979747   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:41.971464   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.972400   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.973426   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.974900   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.975372   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:41.971464   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.972400   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.973426   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.974900   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:41.975372   16224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:41.979757   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:41.979767   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:42.054766   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:42.054787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:42.086163   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:42.086187   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:44.697773   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:44.707597   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:44.707659   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:44.733158   46141 cri.go:89] found id: ""
	I1202 19:27:44.733184   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.733191   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:44.733196   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:44.733261   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:44.757757   46141 cri.go:89] found id: ""
	I1202 19:27:44.757771   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.757778   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:44.757784   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:44.757843   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:44.783874   46141 cri.go:89] found id: ""
	I1202 19:27:44.783888   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.783897   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:44.783902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:44.783959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:44.816248   46141 cri.go:89] found id: ""
	I1202 19:27:44.816261   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.816268   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:44.816273   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:44.816327   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:44.847419   46141 cri.go:89] found id: ""
	I1202 19:27:44.847433   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.847440   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:44.847445   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:44.847504   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:44.873837   46141 cri.go:89] found id: ""
	I1202 19:27:44.873851   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.873858   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:44.873863   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:44.873918   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:44.897843   46141 cri.go:89] found id: ""
	I1202 19:27:44.897856   46141 logs.go:282] 0 containers: []
	W1202 19:27:44.897863   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:44.897871   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:44.897881   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:44.966499   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:44.966516   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:44.978644   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:44.978659   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:45.054728   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:45.041553   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.042293   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046104   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046962   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.049337   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:45.041553   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.042293   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046104   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.046962   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:45.049337   16329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:45.054738   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:45.054765   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:45.162639   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:45.162660   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:47.718000   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:47.727890   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:47.727953   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:47.752168   46141 cri.go:89] found id: ""
	I1202 19:27:47.752181   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.752188   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:47.752193   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:47.752253   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:47.776058   46141 cri.go:89] found id: ""
	I1202 19:27:47.776071   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.776078   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:47.776086   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:47.776143   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:47.809050   46141 cri.go:89] found id: ""
	I1202 19:27:47.809065   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.809072   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:47.809078   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:47.809142   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:47.851196   46141 cri.go:89] found id: ""
	I1202 19:27:47.851209   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.851222   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:47.851227   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:47.851285   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:47.877019   46141 cri.go:89] found id: ""
	I1202 19:27:47.877033   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.877039   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:47.877045   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:47.877104   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:47.906595   46141 cri.go:89] found id: ""
	I1202 19:27:47.906609   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.906616   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:47.906621   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:47.906684   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:47.931137   46141 cri.go:89] found id: ""
	I1202 19:27:47.931150   46141 logs.go:282] 0 containers: []
	W1202 19:27:47.931157   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:47.931165   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:47.931175   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:47.960778   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:47.960794   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:48.026698   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:48.026716   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:48.039024   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:48.039040   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:48.104995   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:48.097226   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.097787   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.099441   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.100078   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.101639   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:48.097226   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.097787   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.099441   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.100078   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:48.101639   16443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:48.105014   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:48.105026   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:50.681972   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:50.691952   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:50.692008   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:50.716419   46141 cri.go:89] found id: ""
	I1202 19:27:50.716432   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.716438   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:50.716443   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:50.716497   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:50.743698   46141 cri.go:89] found id: ""
	I1202 19:27:50.743712   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.743718   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:50.743723   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:50.743778   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:50.768264   46141 cri.go:89] found id: ""
	I1202 19:27:50.768277   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.768283   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:50.768297   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:50.768354   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:50.794403   46141 cri.go:89] found id: ""
	I1202 19:27:50.794428   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.794436   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:50.794441   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:50.794504   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:50.820731   46141 cri.go:89] found id: ""
	I1202 19:27:50.820745   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.820752   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:50.820757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:50.820812   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:50.852081   46141 cri.go:89] found id: ""
	I1202 19:27:50.852094   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.852101   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:50.852106   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:50.852172   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:50.879611   46141 cri.go:89] found id: ""
	I1202 19:27:50.879625   46141 logs.go:282] 0 containers: []
	W1202 19:27:50.879631   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:50.879644   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:50.879654   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:50.906936   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:50.906951   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:50.975206   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:50.975223   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:50.985872   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:50.985895   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:51.052846   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:51.045333   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.046129   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.047754   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.048056   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.049501   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:51.045333   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.046129   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.047754   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.048056   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:51.049501   16549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:51.052855   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:51.052866   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:53.628857   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:53.638710   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:53.638773   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:53.662581   46141 cri.go:89] found id: ""
	I1202 19:27:53.662595   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.662602   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:53.662607   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:53.662660   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:53.687222   46141 cri.go:89] found id: ""
	I1202 19:27:53.687237   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.687244   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:53.687249   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:53.687306   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:53.711983   46141 cri.go:89] found id: ""
	I1202 19:27:53.711996   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.712003   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:53.712009   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:53.712065   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:53.737377   46141 cri.go:89] found id: ""
	I1202 19:27:53.737391   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.737398   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:53.737403   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:53.737456   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:53.765301   46141 cri.go:89] found id: ""
	I1202 19:27:53.765315   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.765321   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:53.765327   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:53.765383   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:53.793518   46141 cri.go:89] found id: ""
	I1202 19:27:53.793531   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.793537   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:53.793542   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:53.793597   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:53.822849   46141 cri.go:89] found id: ""
	I1202 19:27:53.822863   46141 logs.go:282] 0 containers: []
	W1202 19:27:53.822870   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:53.822877   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:53.822887   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:53.854992   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:53.855010   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:53.921075   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:53.921094   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:53.931936   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:53.931951   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:53.995407   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:53.987658   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.988344   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990099   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990623   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.992109   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:53.987658   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.988344   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990099   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.990623   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:53.992109   16653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:53.995422   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:53.995432   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:56.577211   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:56.588419   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:56.588476   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:56.617070   46141 cri.go:89] found id: ""
	I1202 19:27:56.617083   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.617090   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:56.617096   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:56.617149   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:56.644965   46141 cri.go:89] found id: ""
	I1202 19:27:56.644979   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.644986   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:56.644990   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:56.645050   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:56.673885   46141 cri.go:89] found id: ""
	I1202 19:27:56.673899   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.673906   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:56.673911   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:56.673965   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:56.698577   46141 cri.go:89] found id: ""
	I1202 19:27:56.698590   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.698597   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:56.698603   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:56.698659   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:56.727980   46141 cri.go:89] found id: ""
	I1202 19:27:56.727995   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.728001   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:56.728007   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:56.728061   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:56.752295   46141 cri.go:89] found id: ""
	I1202 19:27:56.752309   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.752316   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:56.752321   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:56.752378   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:56.777216   46141 cri.go:89] found id: ""
	I1202 19:27:56.777228   46141 logs.go:282] 0 containers: []
	W1202 19:27:56.777236   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:56.777243   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:27:56.777254   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:27:56.788028   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:56.788043   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:56.868442   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:56.860615   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.861389   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.862944   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.863447   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.865085   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:56.860615   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.861389   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.862944   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.863447   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:56.865085   16744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:56.868452   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:56.868462   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:56.944462   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:56.944480   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:56.979950   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:56.979964   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:27:59.548516   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:27:59.558289   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:27:59.558346   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:27:59.581971   46141 cri.go:89] found id: ""
	I1202 19:27:59.581984   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.581991   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:27:59.581997   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:27:59.582054   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:27:59.606472   46141 cri.go:89] found id: ""
	I1202 19:27:59.606485   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.606492   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:27:59.606497   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:27:59.606551   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:27:59.631964   46141 cri.go:89] found id: ""
	I1202 19:27:59.631977   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.631984   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:27:59.631989   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:27:59.632042   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:27:59.657151   46141 cri.go:89] found id: ""
	I1202 19:27:59.657164   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.657171   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:27:59.657177   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:27:59.657232   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:27:59.683812   46141 cri.go:89] found id: ""
	I1202 19:27:59.683826   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.683834   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:27:59.683840   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:27:59.683901   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:27:59.712800   46141 cri.go:89] found id: ""
	I1202 19:27:59.712814   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.712821   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:27:59.712826   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:27:59.712900   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:27:59.745829   46141 cri.go:89] found id: ""
	I1202 19:27:59.745842   46141 logs.go:282] 0 containers: []
	W1202 19:27:59.745849   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:27:59.745856   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:27:59.745868   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:27:59.817077   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:27:59.807605   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.808444   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.809916   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.810599   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.812428   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:27:59.807605   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.808444   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.809916   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.810599   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:27:59.812428   16834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:27:59.817087   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:27:59.817097   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:27:59.907455   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:27:59.907474   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:27:59.935466   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:27:59.935480   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:00.005487   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:00.005511   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:02.519937   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:02.529900   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:02.529967   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:02.555080   46141 cri.go:89] found id: ""
	I1202 19:28:02.555093   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.555099   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:02.555105   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:02.555160   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:02.579988   46141 cri.go:89] found id: ""
	I1202 19:28:02.580002   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.580009   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:02.580015   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:02.580069   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:02.604847   46141 cri.go:89] found id: ""
	I1202 19:28:02.604861   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.604868   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:02.604874   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:02.604937   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:02.629805   46141 cri.go:89] found id: ""
	I1202 19:28:02.629818   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.629825   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:02.629832   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:02.629888   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:02.654310   46141 cri.go:89] found id: ""
	I1202 19:28:02.654324   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.654330   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:02.654336   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:02.654393   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:02.683226   46141 cri.go:89] found id: ""
	I1202 19:28:02.683239   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.683246   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:02.683252   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:02.683306   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:02.707703   46141 cri.go:89] found id: ""
	I1202 19:28:02.707717   46141 logs.go:282] 0 containers: []
	W1202 19:28:02.707724   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:02.707732   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:02.707741   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:02.783085   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:02.783103   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:02.829513   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:02.829528   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:02.903215   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:02.903231   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:02.914284   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:02.914302   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:02.974963   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:02.967707   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.968085   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.969699   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.970246   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.971730   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:02.967707   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.968085   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.969699   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.970246   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:02.971730   16967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:05.475826   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:05.485953   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:05.486009   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:05.512427   46141 cri.go:89] found id: ""
	I1202 19:28:05.512440   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.512447   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:05.512453   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:05.512509   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:05.536678   46141 cri.go:89] found id: ""
	I1202 19:28:05.536691   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.536698   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:05.536703   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:05.536757   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:05.561732   46141 cri.go:89] found id: ""
	I1202 19:28:05.561745   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.561752   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:05.561757   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:05.561810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:05.585989   46141 cri.go:89] found id: ""
	I1202 19:28:05.586003   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.586010   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:05.586015   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:05.586073   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:05.611860   46141 cri.go:89] found id: ""
	I1202 19:28:05.611891   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.611899   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:05.611904   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:05.611969   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:05.637502   46141 cri.go:89] found id: ""
	I1202 19:28:05.637516   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.637523   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:05.637528   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:05.637583   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:05.662486   46141 cri.go:89] found id: ""
	I1202 19:28:05.662499   46141 logs.go:282] 0 containers: []
	W1202 19:28:05.662506   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:05.662514   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:05.662525   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:05.727597   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:05.727615   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:05.738294   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:05.738309   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:05.810066   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:05.802014   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.802691   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804141   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804634   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.806227   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:05.802014   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.802691   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804141   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.804634   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:05.806227   17055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:05.810076   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:05.810088   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:05.892482   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:05.892506   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:08.423125   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:08.433033   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:08.433090   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:08.458175   46141 cri.go:89] found id: ""
	I1202 19:28:08.458189   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.458195   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:08.458201   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:08.458257   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:08.483893   46141 cri.go:89] found id: ""
	I1202 19:28:08.483906   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.483913   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:08.483918   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:08.483974   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:08.507923   46141 cri.go:89] found id: ""
	I1202 19:28:08.507937   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.507953   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:08.507964   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:08.508081   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:08.537015   46141 cri.go:89] found id: ""
	I1202 19:28:08.537030   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.537041   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:08.537046   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:08.537102   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:08.562386   46141 cri.go:89] found id: ""
	I1202 19:28:08.562399   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.562405   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:08.562410   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:08.562464   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:08.589367   46141 cri.go:89] found id: ""
	I1202 19:28:08.589380   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.589387   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:08.589392   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:08.589446   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:08.614763   46141 cri.go:89] found id: ""
	I1202 19:28:08.614776   46141 logs.go:282] 0 containers: []
	W1202 19:28:08.614782   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:08.614790   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:08.614806   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:08.680003   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:08.680020   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:08.691092   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:08.691108   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:08.758435   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:08.751102   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.751837   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.753302   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.754013   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.755221   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:08.751102   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.751837   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.753302   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.754013   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:08.755221   17165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:08.758444   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:08.758455   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:08.838206   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:08.838225   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:11.377402   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:11.387381   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:11.387443   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:11.416000   46141 cri.go:89] found id: ""
	I1202 19:28:11.416013   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.416020   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:11.416025   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:11.416086   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:11.440887   46141 cri.go:89] found id: ""
	I1202 19:28:11.440900   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.440907   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:11.440913   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:11.440980   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:11.469507   46141 cri.go:89] found id: ""
	I1202 19:28:11.469520   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.469527   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:11.469533   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:11.469589   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:11.494304   46141 cri.go:89] found id: ""
	I1202 19:28:11.494324   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.494331   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:11.494337   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:11.494395   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:11.519823   46141 cri.go:89] found id: ""
	I1202 19:28:11.519836   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.519843   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:11.519848   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:11.519905   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:11.544959   46141 cri.go:89] found id: ""
	I1202 19:28:11.544972   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.544980   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:11.544985   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:11.545043   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:11.569409   46141 cri.go:89] found id: ""
	I1202 19:28:11.569422   46141 logs.go:282] 0 containers: []
	W1202 19:28:11.569429   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:11.569437   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:11.569449   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:11.605867   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:11.605883   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:11.672817   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:11.672835   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:11.683920   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:11.683937   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:11.748483   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:11.740906   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.741544   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743071   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743398   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.744924   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:11.740906   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.741544   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743071   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.743398   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:11.744924   17278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:11.748494   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:11.748505   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:14.328100   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:14.338319   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:14.338385   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:14.368273   46141 cri.go:89] found id: ""
	I1202 19:28:14.368287   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.368293   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:14.368299   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:14.368353   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:14.393695   46141 cri.go:89] found id: ""
	I1202 19:28:14.393708   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.393715   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:14.393720   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:14.393778   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:14.419532   46141 cri.go:89] found id: ""
	I1202 19:28:14.419546   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.419552   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:14.419558   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:14.419611   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:14.444792   46141 cri.go:89] found id: ""
	I1202 19:28:14.444806   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.444812   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:14.444818   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:14.444874   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:14.473002   46141 cri.go:89] found id: ""
	I1202 19:28:14.473015   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.473022   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:14.473027   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:14.473082   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:14.500557   46141 cri.go:89] found id: ""
	I1202 19:28:14.500570   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.500577   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:14.500583   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:14.500639   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:14.531570   46141 cri.go:89] found id: ""
	I1202 19:28:14.531583   46141 logs.go:282] 0 containers: []
	W1202 19:28:14.531591   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:14.531598   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:14.531608   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:14.563367   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:14.563385   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:14.629330   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:14.629348   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:14.640467   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:14.640482   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:14.703192   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:14.695736   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.696103   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697646   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697966   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.699508   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:14.695736   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.696103   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697646   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.697966   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:14.699508   17382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:14.703201   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:14.703212   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:17.280934   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:17.290754   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:17.290816   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:17.315632   46141 cri.go:89] found id: ""
	I1202 19:28:17.315645   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.315652   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:17.315657   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:17.315715   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:17.339240   46141 cri.go:89] found id: ""
	I1202 19:28:17.339256   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.339281   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:17.339304   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:17.339361   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:17.362387   46141 cri.go:89] found id: ""
	I1202 19:28:17.362401   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.362408   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:17.362415   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:17.362471   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:17.388183   46141 cri.go:89] found id: ""
	I1202 19:28:17.388197   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.388204   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:17.388209   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:17.388264   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:17.417561   46141 cri.go:89] found id: ""
	I1202 19:28:17.417575   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.417582   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:17.417588   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:17.417643   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:17.441561   46141 cri.go:89] found id: ""
	I1202 19:28:17.441574   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.441581   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:17.441596   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:17.441678   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:17.467464   46141 cri.go:89] found id: ""
	I1202 19:28:17.467477   46141 logs.go:282] 0 containers: []
	W1202 19:28:17.467483   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:17.467491   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:17.467501   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:17.543368   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:17.543386   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:17.574792   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:17.574807   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:17.641345   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:17.641363   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:17.651872   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:17.651892   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:17.719233   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:17.711006   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.711827   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713430   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713940   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.715763   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:17.711006   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.711827   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713430   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.713940   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:17.715763   17489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:20.219437   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:20.229376   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:20.229437   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:20.254960   46141 cri.go:89] found id: ""
	I1202 19:28:20.254973   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.254980   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:20.254985   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:20.255048   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:20.280663   46141 cri.go:89] found id: ""
	I1202 19:28:20.280676   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.280683   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:20.280688   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:20.280744   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:20.309275   46141 cri.go:89] found id: ""
	I1202 19:28:20.309288   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.309295   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:20.309300   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:20.309354   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:20.334255   46141 cri.go:89] found id: ""
	I1202 19:28:20.334268   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.334275   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:20.334281   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:20.334334   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:20.359290   46141 cri.go:89] found id: ""
	I1202 19:28:20.359303   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.359310   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:20.359330   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:20.359385   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:20.387906   46141 cri.go:89] found id: ""
	I1202 19:28:20.387919   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.387931   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:20.387937   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:20.387995   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:20.412377   46141 cri.go:89] found id: ""
	I1202 19:28:20.412391   46141 logs.go:282] 0 containers: []
	W1202 19:28:20.412398   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:20.412406   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:20.412421   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:20.478975   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:20.478994   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:20.491271   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:20.491286   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:20.559186   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:20.551818   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.552365   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554024   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554320   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.555835   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:20.551818   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.552365   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554024   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.554320   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:20.555835   17582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:20.559197   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:20.559208   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:20.635117   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:20.635135   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:23.163845   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:23.174025   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:23.174084   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:23.198952   46141 cri.go:89] found id: ""
	I1202 19:28:23.198965   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.198972   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:23.198977   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:23.199040   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:23.227109   46141 cri.go:89] found id: ""
	I1202 19:28:23.227122   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.227128   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:23.227133   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:23.227194   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:23.252085   46141 cri.go:89] found id: ""
	I1202 19:28:23.252099   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.252106   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:23.252111   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:23.252178   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:23.282041   46141 cri.go:89] found id: ""
	I1202 19:28:23.282054   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.282061   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:23.282066   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:23.282120   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:23.306149   46141 cri.go:89] found id: ""
	I1202 19:28:23.306163   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.306170   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:23.306176   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:23.306231   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:23.330130   46141 cri.go:89] found id: ""
	I1202 19:28:23.330143   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.330158   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:23.330165   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:23.330232   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:23.354289   46141 cri.go:89] found id: ""
	I1202 19:28:23.354303   46141 logs.go:282] 0 containers: []
	W1202 19:28:23.354309   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:23.354317   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:23.354327   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:23.421463   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:23.421481   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:23.432425   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:23.432442   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:23.499162   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:23.491387   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.491769   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493283   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493585   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.495084   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:23.491387   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.491769   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493283   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.493585   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:23.495084   17688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:23.499185   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:23.499198   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:23.574769   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:23.574787   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:26.102251   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:26.112999   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:26.113059   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:26.139511   46141 cri.go:89] found id: ""
	I1202 19:28:26.139527   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.139534   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:26.139539   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:26.139595   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:26.163810   46141 cri.go:89] found id: ""
	I1202 19:28:26.163823   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.163830   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:26.163845   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:26.163903   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:26.195678   46141 cri.go:89] found id: ""
	I1202 19:28:26.195691   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.195716   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:26.195721   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:26.195784   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:26.221498   46141 cri.go:89] found id: ""
	I1202 19:28:26.221512   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.221519   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:26.221524   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:26.221591   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:26.246377   46141 cri.go:89] found id: ""
	I1202 19:28:26.246391   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.246397   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:26.246402   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:26.246464   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:26.270652   46141 cri.go:89] found id: ""
	I1202 19:28:26.270665   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.270673   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:26.270678   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:26.270763   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:26.296694   46141 cri.go:89] found id: ""
	I1202 19:28:26.296707   46141 logs.go:282] 0 containers: []
	W1202 19:28:26.296714   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:26.296722   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:26.296735   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:26.371620   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:26.362743   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.363658   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365278   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365832   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.367443   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:26.362743   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.363658   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365278   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.365832   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:26.367443   17786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:26.371631   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:26.371641   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:26.451711   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:26.451734   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:26.483175   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:26.483191   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:26.549681   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:26.549701   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
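Every "describe nodes" attempt in the cycles above fails the same way: kubectl gets "connection refused" dialing localhost:8441, and crictl finds no kube-apiserver container at all. The harness does not run any extra checks here, but as a hedged sketch of a manual follow-up on the node (for example over `minikube ssh`), one could confirm that nothing is serving the apiserver port the kubeconfig points at:

    # Is any process bound to the apiserver port from the log (8441)?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Has CRI-O created (or crash-looped) an apiserver container at all?
    sudo crictl ps -a --name=kube-apiserver
    # Probe the endpoint directly; "connection refused" matches the log above.
    curl -sk https://localhost:8441/livez || true

These commands are illustrative only; the port number and container name are taken from the log, everything else is a plausible manual check rather than part of the test run.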
	I1202 19:28:29.061808   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:29.072772   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:28:29.072827   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:28:29.101985   46141 cri.go:89] found id: ""
	I1202 19:28:29.101999   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.102006   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:28:29.102013   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:28:29.102074   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:28:29.128784   46141 cri.go:89] found id: ""
	I1202 19:28:29.128797   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.128803   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:28:29.128808   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:28:29.128862   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:28:29.156726   46141 cri.go:89] found id: ""
	I1202 19:28:29.156740   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.156747   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:28:29.156753   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:28:29.156810   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:28:29.186146   46141 cri.go:89] found id: ""
	I1202 19:28:29.186159   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.186167   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:28:29.186173   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:28:29.186230   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:28:29.210367   46141 cri.go:89] found id: ""
	I1202 19:28:29.210381   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.210387   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:28:29.210392   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:28:29.210448   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:28:29.234607   46141 cri.go:89] found id: ""
	I1202 19:28:29.234620   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.234635   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:28:29.234641   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:28:29.234695   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:28:29.260124   46141 cri.go:89] found id: ""
	I1202 19:28:29.260137   46141 logs.go:282] 0 containers: []
	W1202 19:28:29.260144   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:28:29.260151   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:28:29.260161   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:28:29.270869   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:28:29.270885   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:28:29.335425   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:28:29.327338   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.328130   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.329607   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.330178   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.332063   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:28:29.327338   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.328130   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.329607   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.330178   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:28:29.332063   17898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:28:29.335435   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:28:29.335448   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:28:29.416026   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:28:29.416053   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:28:29.444738   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:28:29.444757   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 19:28:32.015450   46141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:28:32.028692   46141 kubeadm.go:602] duration metric: took 4m2.303606504s to restartPrimaryControlPlane
	W1202 19:28:32.028752   46141 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 19:28:32.028882   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:28:32.448460   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:28:32.461105   46141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:28:32.468953   46141 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:28:32.469018   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:28:32.476620   46141 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:28:32.476629   46141 kubeadm.go:158] found existing configuration files:
	
	I1202 19:28:32.476680   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:28:32.484342   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:28:32.484396   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:28:32.491816   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:28:32.499468   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:28:32.499526   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:28:32.506680   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:28:32.513998   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:28:32.514056   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:28:32.521915   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:28:32.529746   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:28:32.529813   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:28:32.537427   46141 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:28:32.575514   46141 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:28:32.575563   46141 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:28:32.649801   46141 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:28:32.649866   46141 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:28:32.649900   46141 kubeadm.go:319] OS: Linux
	I1202 19:28:32.649943   46141 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:28:32.649990   46141 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:28:32.650036   46141 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:28:32.650083   46141 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:28:32.650129   46141 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:28:32.650176   46141 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:28:32.650220   46141 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:28:32.650266   46141 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:28:32.650311   46141 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:28:32.711361   46141 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:28:32.711478   46141 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:28:32.711574   46141 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:28:32.719716   46141 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:28:32.725408   46141 out.go:252]   - Generating certificates and keys ...
	I1202 19:28:32.725506   46141 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:28:32.725580   46141 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:28:32.725675   46141 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:28:32.725741   46141 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:28:32.725818   46141 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:28:32.725877   46141 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:28:32.725939   46141 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:28:32.726006   46141 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:28:32.726085   46141 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:28:32.726169   46141 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:28:32.726206   46141 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:28:32.726266   46141 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:28:32.962990   46141 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:28:33.139589   46141 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:28:33.816592   46141 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:28:34.040085   46141 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:28:34.279545   46141 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:28:34.280074   46141 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:28:34.282763   46141 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:28:34.285708   46141 out.go:252]   - Booting up control plane ...
	I1202 19:28:34.285809   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:28:34.285891   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:28:34.288012   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:28:34.303407   46141 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:28:34.303530   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:28:34.311292   46141 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:28:34.311561   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:28:34.311687   46141 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:28:34.441389   46141 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:28:34.442903   46141 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:32:34.442631   46141 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001443729s
	I1202 19:32:34.442655   46141 kubeadm.go:319] 
	I1202 19:32:34.442716   46141 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:32:34.442751   46141 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:32:34.442868   46141 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:32:34.442876   46141 kubeadm.go:319] 
	I1202 19:32:34.443019   46141 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:32:34.443050   46141 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:32:34.443105   46141 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:32:34.443119   46141 kubeadm.go:319] 
	I1202 19:32:34.446600   46141 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:32:34.447010   46141 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:32:34.447116   46141 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:32:34.447358   46141 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:32:34.447364   46141 kubeadm.go:319] 
	I1202 19:32:34.447431   46141 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 19:32:34.447530   46141 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001443729s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 19:32:34.447615   46141 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 19:32:34.857158   46141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:32:34.869767   46141 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 19:32:34.869822   46141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:32:34.877453   46141 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:32:34.877463   46141 kubeadm.go:158] found existing configuration files:
	
	I1202 19:32:34.877520   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 19:32:34.885001   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:32:34.885057   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:32:34.892315   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 19:32:34.899801   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:32:34.899854   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:32:34.907104   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 19:32:34.914843   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:32:34.914905   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:32:34.922357   46141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 19:32:34.930005   46141 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:32:34.930062   46141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:32:34.937883   46141 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 19:32:34.977710   46141 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 19:32:34.977941   46141 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:32:35.052803   46141 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 19:32:35.052872   46141 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 19:32:35.052916   46141 kubeadm.go:319] OS: Linux
	I1202 19:32:35.052967   46141 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 19:32:35.053025   46141 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 19:32:35.053081   46141 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 19:32:35.053132   46141 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 19:32:35.053189   46141 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 19:32:35.053247   46141 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 19:32:35.053296   46141 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 19:32:35.053361   46141 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 19:32:35.053405   46141 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 19:32:35.129057   46141 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:32:35.129160   46141 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:32:35.129249   46141 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:32:35.136437   46141 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:32:35.141766   46141 out.go:252]   - Generating certificates and keys ...
	I1202 19:32:35.141858   46141 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:32:35.141951   46141 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:32:35.142045   46141 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 19:32:35.142120   46141 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 19:32:35.142195   46141 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 19:32:35.142254   46141 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 19:32:35.142330   46141 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 19:32:35.142391   46141 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 19:32:35.142465   46141 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 19:32:35.142537   46141 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 19:32:35.142573   46141 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 19:32:35.142628   46141 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:32:35.719108   46141 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:32:35.855328   46141 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:32:36.315829   46141 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:32:36.611755   46141 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:32:36.762758   46141 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:32:36.763311   46141 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:32:36.766390   46141 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:32:36.769564   46141 out.go:252]   - Booting up control plane ...
	I1202 19:32:36.769677   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:32:36.769754   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:32:36.771251   46141 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:32:36.785826   46141 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:32:36.785928   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:32:36.793103   46141 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:32:36.793426   46141 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:32:36.793594   46141 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:32:36.913663   46141 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:32:36.913775   46141 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:36:36.914797   46141 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001215513s
	I1202 19:36:36.914820   46141 kubeadm.go:319] 
	I1202 19:36:36.914918   46141 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 19:36:36.915114   46141 kubeadm.go:319] 	- The kubelet is not running
	I1202 19:36:36.915295   46141 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 19:36:36.915303   46141 kubeadm.go:319] 
	I1202 19:36:36.915482   46141 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 19:36:36.915772   46141 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 19:36:36.915825   46141 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 19:36:36.915828   46141 kubeadm.go:319] 
	I1202 19:36:36.923850   46141 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 19:36:36.924318   46141 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 19:36:36.924432   46141 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:36:36.924695   46141 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 19:36:36.924703   46141 kubeadm.go:319] 
	I1202 19:36:36.924833   46141 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 19:36:36.924858   46141 kubeadm.go:403] duration metric: took 12m7.236978439s to StartCluster
	I1202 19:36:36.924902   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 19:36:36.924959   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 19:36:36.952746   46141 cri.go:89] found id: ""
	I1202 19:36:36.952760   46141 logs.go:282] 0 containers: []
	W1202 19:36:36.952767   46141 logs.go:284] No container was found matching "kube-apiserver"
	I1202 19:36:36.952772   46141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 19:36:36.952828   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 19:36:36.977200   46141 cri.go:89] found id: ""
	I1202 19:36:36.977214   46141 logs.go:282] 0 containers: []
	W1202 19:36:36.977221   46141 logs.go:284] No container was found matching "etcd"
	I1202 19:36:36.977226   46141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 19:36:36.977291   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 19:36:37.002232   46141 cri.go:89] found id: ""
	I1202 19:36:37.002246   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.002253   46141 logs.go:284] No container was found matching "coredns"
	I1202 19:36:37.002258   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 19:36:37.002321   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 19:36:37.051601   46141 cri.go:89] found id: ""
	I1202 19:36:37.051615   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.051621   46141 logs.go:284] No container was found matching "kube-scheduler"
	I1202 19:36:37.051626   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 19:36:37.051681   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 19:36:37.102950   46141 cri.go:89] found id: ""
	I1202 19:36:37.102976   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.102983   46141 logs.go:284] No container was found matching "kube-proxy"
	I1202 19:36:37.102988   46141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 19:36:37.103051   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 19:36:37.131342   46141 cri.go:89] found id: ""
	I1202 19:36:37.131355   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.131362   46141 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 19:36:37.131368   46141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 19:36:37.131423   46141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 19:36:37.159192   46141 cri.go:89] found id: ""
	I1202 19:36:37.159206   46141 logs.go:282] 0 containers: []
	W1202 19:36:37.159213   46141 logs.go:284] No container was found matching "kindnet"
	I1202 19:36:37.159221   46141 logs.go:123] Gathering logs for dmesg ...
	I1202 19:36:37.159234   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 19:36:37.170095   46141 logs.go:123] Gathering logs for describe nodes ...
	I1202 19:36:37.170110   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 19:36:37.234222   46141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:36:37.226220   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.226951   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.228557   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.229059   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.230654   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 19:36:37.226220   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.226951   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.228557   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.229059   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:36:37.230654   21692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 19:36:37.234232   46141 logs.go:123] Gathering logs for CRI-O ...
	I1202 19:36:37.234242   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 19:36:37.306216   46141 logs.go:123] Gathering logs for container status ...
	I1202 19:36:37.306234   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 19:36:37.334163   46141 logs.go:123] Gathering logs for kubelet ...
	I1202 19:36:37.334178   46141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 19:36:37.399997   46141 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 19:36:37.400040   46141 out.go:285] * 
	W1202 19:36:37.400110   46141 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:36:37.400129   46141 out.go:285] * 
	W1202 19:36:37.402271   46141 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:36:37.407816   46141 out.go:203] 
	W1202 19:36:37.411562   46141 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001215513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 19:36:37.411641   46141 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 19:36:37.411664   46141 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 19:36:37.415811   46141 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546654939Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546834414Z" level=info msg="Starting seccomp notifier watcher"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.546950457Z" level=info msg="Create NRI interface"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.5471107Z" level=info msg="built-in NRI default validator is disabled"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547130474Z" level=info msg="runtime interface created"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.54714466Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547151634Z" level=info msg="runtime interface starting up..."
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547157616Z" level=info msg="starting plugins..."
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547170686Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 19:24:28 functional-374330 crio[10567]: time="2025-12-02T19:24:28.547251727Z" level=info msg="No systemd watchdog enabled"
	Dec 02 19:24:28 functional-374330 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.715009926Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=bc19958f-d803-4cd2-a545-4f6c118c1f40 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.716039792Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=97921bbe-b2e3-494c-be19-702e5072b6db name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.716591601Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=702ce713-4736-4f82-bd4c-9fc9629fcb4d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.717128034Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5900f7cc-9a33-4e7a-8a73-829e63e64047 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.717627973Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0a3735ac-393a-45fe-a0d5-34b181ae2dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.718273997Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4854b9da-7f98-4e1b-9a6a-97fc85aeb622 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:28:32 functional-374330 crio[10567]: time="2025-12-02T19:28:32.718754056Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=f046502e-805f-4087-97ee-276ea86f9117 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.132448562Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=bfb0729f-fcf5-4cf1-8661-79e44060815d name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.133109196Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=59868b2f-ef1f-42db-9580-1c52177e5173 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.133599056Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=0dadf3fc-12a7-405c-8560-5fb835ac24e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.134131974Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f3eedcce-a194-4413-8ad5-a61c4ca64183 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.134584067Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0d9672a7-dea9-4cd7-b618-4662ee6fbedc name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.135094472Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=61806fbf-e06a-40e0-ab81-3632b0f3ac8c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:32:35 functional-374330 crio[10567]: time="2025-12-02T19:32:35.135559257Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=e966dc55-aa48-4909-b2a5-1769d8bd5c4c name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:38:30.689831   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:30.690629   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:30.692219   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:30.692507   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:30.694092   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:38:30 up  1:20,  0 user,  load average: 0.21, 0.27, 0.29
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:38:28 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:28 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1109.
	Dec 02 19:38:28 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:28 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:28 functional-374330 kubelet[23052]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:28 functional-374330 kubelet[23052]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:28 functional-374330 kubelet[23052]: E1202 19:38:28.839032   23052 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:28 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:28 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:29 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1110.
	Dec 02 19:38:29 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:29 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:29 functional-374330 kubelet[23057]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:29 functional-374330 kubelet[23057]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:29 functional-374330 kubelet[23057]: E1202 19:38:29.601423   23057 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:29 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:29 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:30 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1111.
	Dec 02 19:38:30 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:30 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:30 functional-374330 kubelet[23080]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:30 functional-374330 kubelet[23080]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:30 functional-374330 kubelet[23080]: E1202 19:38:30.346848   23080 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:30 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:30 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (330.231435ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.31s)
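The failure above shares one root cause with the rest of this run: on this cgroup v1 host the kubelet for Kubernetes v1.35.0-beta.0 refuses to start ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out and the apiserver on port 8441 never answers. Below is a minimal triage sketch for the node; it only uses commands the warnings themselves print (systemctl status kubelet, journalctl -xeu kubelet, the minikube --extra-config suggestion) plus a standard cgroup-mode check. The KubeletConfiguration key spelling (failCgroupV1) and the idea of appending it to /var/lib/kubelet/config.yaml are illustrative assumptions; the warning only names the option 'FailCgroupV1'.

	# Run the checks the warnings suggest, from the host, against the affected node.
	minikube ssh -p functional-374330 -- sudo systemctl status kubelet
	minikube ssh -p functional-374330 -- sudo journalctl -xeu kubelet | tail -n 50
	# Filesystem type of /sys/fs/cgroup: "cgroup2fs" means cgroup v2, "tmpfs" means a cgroup v1 hierarchy.
	minikube ssh -p functional-374330 -- stat -fc %T /sys/fs/cgroup/

	# Hypothetical workaround sketch: opt back into cgroup v1 by setting the option the warning
	# names ('FailCgroupV1') to false in the kubelet config minikube writes, then restart the kubelet.
	# The key casing and file placement are assumptions, not confirmed by this log.
	minikube ssh -p functional-374330 -- "grep -i cgroup /var/lib/kubelet/config.yaml || true"
	minikube ssh -p functional-374330 -- "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml"
	minikube ssh -p functional-374330 -- sudo systemctl restart kubelet

	# Alternatively, the suggestion minikube prints above targets the cgroup driver instead:
	minikube start -p functional-374330 --extra-config=kubelet.cgroup-driver=systemd

Whether the config-file route is accepted depends on the kubelet build; the longer-term fix the deprecation warning points at is migrating the runner to cgroup v2.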

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 19:36:55.944011    4470 retry.go:31] will retry after 4.277319996s: Temporary Error: Get "http://10.110.46.155": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1202 19:36:57.357687    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 11 more times]
I1202 19:37:10.221789    4470 retry.go:31] will retry after 4.264430652s: Temporary Error: Get "http://10.110.46.155": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 14 more times]
I1202 19:37:24.487653    4470 retry.go:31] will retry after 4.767561704s: Temporary Error: Get "http://10.110.46.155": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 13 more times]
I1202 19:37:39.256338    4470 retry.go:31] will retry after 10.091357673s: Temporary Error: Get "http://10.110.46.155": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 19 more times]
I1202 19:37:59.348867    4470 retry.go:31] will retry after 19.614367554s: Temporary Error: Get "http://10.110.46.155": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: [... the identical warning above repeated 84 more times over the 4m0s wait; every pod list against https://192.168.49.2:8441 was refused ...]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (297.371654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
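For context, the repeated warnings above come from a poll loop that lists pods by label selector until the pod is Running or the deadline expires. The following is a hypothetical sketch of such a loop using client-go, not the helper's actual code; the kubeconfig path and the 4m0s budget are taken from this run, everything else is illustrative:

// Hypothetical sketch (not the minikube test helper itself): poll kube-system
// for pods matching the label selector the test waits on, until Running or
// the deadline. When the apiserver on 192.168.49.2:8441 is down, the List
// call fails with "connect: connection refused", matching the warnings above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported later in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22021-2526/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 4m0s budget the test uses before giving up.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "integration-test=storage-provisioner",
		})
		if err != nil {
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("storage-provisioner is running")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up:", ctx.Err()) // -> "context deadline exceeded"
			return
		case <-time.After(2 * time.Second):
		}
	}
}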
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
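Reading the inspect output above: the container itself is Running, but minikube reports the apiserver as Stopped, and docker publishes the apiserver port 8441 on the host as 127.0.0.1:32786. A quick way to distinguish "apiserver process down" from "port mapping broken" is to probe both routes; the sketch below is illustrative and not part of the test suite, with the addresses taken from this inspect output:

// Hypothetical probe: dial the apiserver via the container network address
// the test uses (192.168.49.2:8441) and via the host-published port shown
// under "Ports" above (127.0.0.1:32786). Refusal on both suggests nothing is
// listening on 8441 inside the guest, i.e. the apiserver itself is down,
// consistent with the "Stopped" status reported earlier.
package main

import (
	"fmt"
	"net"
	"time"
)

func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: listening\n", addr)
}

func main() {
	probe("192.168.49.2:8441") // apiserver port on the minikube container network
	probe("127.0.0.1:32786")   // same port as published to the host
}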
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (309.054099ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-374330 image rm kicbase/echo-server:functional-374330 --alsologtostderr                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image          │ functional-374330 image ls                                                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image          │ functional-374330 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image          │ functional-374330 image save --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ start          │ -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ start          │ -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ start          │ -p functional-374330 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-374330 --alsologtostderr -v=1                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh            │ functional-374330 ssh sudo cat /etc/ssl/certs/4470.pem                                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh            │ functional-374330 ssh sudo cat /usr/share/ca-certificates/4470.pem                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh            │ functional-374330 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh            │ functional-374330 ssh sudo cat /etc/ssl/certs/44702.pem                                                                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh            │ functional-374330 ssh sudo cat /usr/share/ca-certificates/44702.pem                                                                           │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh            │ functional-374330 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh            │ functional-374330 ssh sudo cat /etc/test/nested/copy/4470/hosts                                                                               │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image          │ functional-374330 image ls --format short --alsologtostderr                                                                                   │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image          │ functional-374330 image ls --format yaml --alsologtostderr                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh            │ functional-374330 ssh pgrep buildkitd                                                                                                         │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ image          │ functional-374330 image build -t localhost/my-image:functional-374330 testdata/build --alsologtostderr                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:39 UTC │
	│ image          │ functional-374330 image ls                                                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:39 UTC │ 02 Dec 25 19:39 UTC │
	│ image          │ functional-374330 image ls --format json --alsologtostderr                                                                                    │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:39 UTC │ 02 Dec 25 19:39 UTC │
	│ image          │ functional-374330 image ls --format table --alsologtostderr                                                                                   │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:39 UTC │ 02 Dec 25 19:39 UTC │
	│ update-context │ functional-374330 update-context --alsologtostderr -v=2                                                                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:39 UTC │ 02 Dec 25 19:39 UTC │
	│ update-context │ functional-374330 update-context --alsologtostderr -v=2                                                                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:39 UTC │ 02 Dec 25 19:39 UTC │
	│ update-context │ functional-374330 update-context --alsologtostderr -v=2                                                                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:39 UTC │ 02 Dec 25 19:39 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:38:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:38:53.228034   64453 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:38:53.228160   64453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:53.228172   64453 out.go:374] Setting ErrFile to fd 2...
	I1202 19:38:53.228176   64453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:53.228427   64453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:38:53.228783   64453 out.go:368] Setting JSON to false
	I1202 19:38:53.229577   64453 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4872,"bootTime":1764699462,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:38:53.229645   64453 start.go:143] virtualization:  
	I1202 19:38:53.232943   64453 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:38:53.235895   64453 notify.go:221] Checking for updates...
	I1202 19:38:53.236732   64453 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:38:53.240137   64453 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:38:53.242950   64453 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:38:53.245724   64453 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:38:53.248553   64453 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:38:53.251341   64453 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:38:53.254696   64453 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:38:53.255303   64453 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:38:53.276424   64453 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:38:53.276532   64453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:38:53.344863   64453 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:53.336070516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:38:53.344964   64453 docker.go:319] overlay module found
	I1202 19:38:53.348139   64453 out.go:179] * Using the docker driver based on existing profile
	I1202 19:38:53.350863   64453 start.go:309] selected driver: docker
	I1202 19:38:53.350879   64453 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:38:53.351022   64453 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:38:53.351137   64453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:38:53.405866   64453 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:53.396812701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:38:53.406270   64453 cni.go:84] Creating CNI manager for ""
	I1202 19:38:53.406342   64453 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 19:38:53.406382   64453 start.go:353] cluster config:
	{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:38:53.409346   64453 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.777069151Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=0b5acd9e-3dc2-4d8e-bdd5-4eea4b6dba9b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.800434178Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.8005674Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.800604847Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.623558249Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=c3d5d0bb-6081-41b1-93fe-5ad0cc5cb721 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647477581Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647625318Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647666326Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671059718Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671198462Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671239421Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.728365711Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=4bbd67ea-391b-43a5-b118-a6fcfbfb2e41 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.751873881Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.752015472Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.752053206Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778816904Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778943234Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778980575Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.556483837Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=cedd96f0-1f8c-4d01-a073-f9a1fec94943 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587720583Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587870527Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587912462Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615511735Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615657093Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615696862Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:40:47.583301   25949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:40:47.584379   25949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:40:47.585269   25949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:40:47.586732   25949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:40:47.587131   25949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:40:47 up  1:23,  0 user,  load average: 0.36, 0.33, 0.31
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:40:44 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:40:45 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1291.
	Dec 02 19:40:45 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:40:45 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:40:45 functional-374330 kubelet[25822]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:40:45 functional-374330 kubelet[25822]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:40:45 functional-374330 kubelet[25822]: E1202 19:40:45.584902   25822 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:40:45 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:40:45 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:40:46 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1292.
	Dec 02 19:40:46 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:40:46 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:40:46 functional-374330 kubelet[25827]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:40:46 functional-374330 kubelet[25827]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:40:46 functional-374330 kubelet[25827]: E1202 19:40:46.337871   25827 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:40:46 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:40:46 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:40:47 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 02 19:40:47 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:40:47 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:40:47 functional-374330 kubelet[25863]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:40:47 functional-374330 kubelet[25863]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:40:47 functional-374330 kubelet[25863]: E1202 19:40:47.107381   25863 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:40:47 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:40:47 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
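The kubelet lines above show the node agent crash-looping (restart counter 1291-1293) because it refuses to start on a cgroup v1 host, which in turn leaves the apiserver on port 8441 unreachable for every kubectl call in this dump. A minimal way to confirm both symptoms from the host, assuming the node container is still named functional-374330 as in the docker inspect output elsewhere in this report (illustrative commands only, not part of the test run):

	# cgroup v1 hosts report "tmpfs" for the unified mount point; cgroup v2 hosts report "cgroup2fs"
	docker exec functional-374330 stat -fc %T /sys/fs/cgroup
	# if the control plane were up, the apiserver would answer on the node IP seen in the logs above
	curl -sk https://192.168.49.2:8441/healthz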
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (289.741898ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-374330 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-374330 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (68.514598ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-374330 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
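The template itself is not at fault here: the apiserver at 192.168.49.2:8441 refuses the connection, kubectl falls back to an empty List, and "index .items 0" then has nothing to index. A guarded variant of the same template, shown only as an illustration of the error (it is not what the test runs), prints nothing instead of panicking when the list is empty:

	kubectl --context functional-374330 get nodes -o go-template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'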
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-374330
helpers_test.go:243: (dbg) docker inspect functional-374330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	        "Created": "2025-12-02T19:09:37.10540907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:09:37.141649935Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd/49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd-json.log",
	        "Name": "/functional-374330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-374330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-374330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49c47a18c15f4ac2c560ea53a369a78b0a246f369315c82edb43e6a28d533cbd",
	                "LowerDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0119475d31a663f4d42a928046a41b6b47256d7e2c32ae065c1dcf9d3e5d8bdc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-374330",
	                "Source": "/var/lib/docker/volumes/functional-374330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-374330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-374330",
	                "name.minikube.sigs.k8s.io": "functional-374330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c86195a203547a5a9a6ce196238fd75d1a363d52417618060d09f3c2e5f431a4",
	            "SandboxKey": "/var/run/docker/netns/c86195a20354",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-374330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:31:2e:93:78:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "33b4355fc4e541626988c817dba657e95f295667d22f1a20b0ae93969c741f30",
	                    "EndpointID": "abd4be168e3abc0793d36d91e71ea2f14e963a0515f674475146101b83883de9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-374330",
	                        "49c47a18c15f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
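The inspect output shows the container itself Running with 8441/tcp published on 127.0.0.1:32786, so the refused connections come from inside the node rather than from Docker networking. The same port mapping can be read back with either of the following (illustrative only, not executed as part of this run):

	docker port functional-374330 8441
	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-374330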
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-374330 -n functional-374330: exit status 2 (306.592648ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-374330 ssh findmnt -T /mount-9p | grep 9p                                                                                                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh -- ls -la /mount-9p                                                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh sudo umount -f /mount-9p                                                                                                            │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount   │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount1 --alsologtostderr -v=1                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount   │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount3 --alsologtostderr -v=1                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh findmnt -T /mount1                                                                                                                  │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ mount   │ -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount2 --alsologtostderr -v=1                      │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh findmnt -T /mount1                                                                                                                  │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh findmnt -T /mount2                                                                                                                  │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ ssh     │ functional-374330 ssh findmnt -T /mount3                                                                                                                  │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ mount   │ -p functional-374330 --kill=true                                                                                                                          │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh sudo systemctl is-active docker                                                                                                     │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ ssh     │ functional-374330 ssh sudo systemctl is-active containerd                                                                                                 │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	│ image   │ functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image ls                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image ls                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image ls                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image save kicbase/echo-server:functional-374330 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image rm kicbase/echo-server:functional-374330 --alsologtostderr                                                                        │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image ls                                                                                                                                │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ image   │ functional-374330 image save --daemon kicbase/echo-server:functional-374330 --alsologtostderr                                                             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │ 02 Dec 25 19:38 UTC │
	│ start   │ -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-374330 │ jenkins │ v1.37.0 │ 02 Dec 25 19:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:38:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:38:51.473946   64064 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:38:51.474143   64064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:51.474169   64064 out.go:374] Setting ErrFile to fd 2...
	I1202 19:38:51.474189   64064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:51.474586   64064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:38:51.475034   64064 out.go:368] Setting JSON to false
	I1202 19:38:51.475886   64064 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4870,"bootTime":1764699462,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:38:51.475981   64064 start.go:143] virtualization:  
	I1202 19:38:51.479620   64064 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:38:51.483780   64064 notify.go:221] Checking for updates...
	I1202 19:38:51.486986   64064 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:38:51.490153   64064 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:38:51.493122   64064 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:38:51.496060   64064 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:38:51.499010   64064 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:38:51.501943   64064 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:38:51.505387   64064 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:38:51.505980   64064 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:38:51.533719   64064 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:38:51.533826   64064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:38:51.591709   64064 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:51.583002025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:38:51.591807   64064 docker.go:319] overlay module found
	I1202 19:38:51.595003   64064 out.go:179] * Using the docker driver based on the existing profile
	I1202 19:38:51.597795   64064 start.go:309] selected driver: docker
	I1202 19:38:51.597817   64064 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:38:51.597927   64064 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:38:51.601377   64064 out.go:203] 
	W1202 19:38:51.604356   64064 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 19:38:51.607359   64064 out.go:203] 
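	The dry-run recorded above deliberately asks for 250MB, and minikube rejects it because the usable minimum is 1800MB, so this start never gets past validation. For comparison only, a dry-run that would clear the memory check might look like the following (the 2048mb value is an assumption for illustration, not taken from the test suite):

	out/minikube-linux-arm64 start -p functional-374330 --dry-run --memory 2048mb --alsologtostderr --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0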
	
	
	==> CRI-O <==
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.777069151Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=0b5acd9e-3dc2-4d8e-bdd5-4eea4b6dba9b name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.800434178Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.8005674Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:47 functional-374330 crio[10567]: time="2025-12-02T19:38:47.800604847Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=fbdb22b1-209c-4ece-a995-6e7ecba9e3a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.623558249Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=c3d5d0bb-6081-41b1-93fe-5ad0cc5cb721 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647477581Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647625318Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.647666326Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=0ee58168-f38a-4996-9f31-e4b6ed839cd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671059718Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671198462Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:48 functional-374330 crio[10567]: time="2025-12-02T19:38:48.671239421Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=706927c5-093a-4086-ba60-6174a2ad3ad7 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.728365711Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=4bbd67ea-391b-43a5-b118-a6fcfbfb2e41 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.751873881Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.752015472Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.752053206Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=db3ef1ae-844c-473a-ae3b-01e5de9e6574 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778816904Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778943234Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:49 functional-374330 crio[10567]: time="2025-12-02T19:38:49.778980575Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=2d1d518f-2a2a-4e1f-b0ee-dd612cbdcb29 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.556483837Z" level=info msg="Checking image status: kicbase/echo-server:functional-374330" id=cedd96f0-1f8c-4d01-a073-f9a1fec94943 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587720583Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-374330" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587870527Z" level=info msg="Image docker.io/kicbase/echo-server:functional-374330 not found" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.587912462Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-374330 found" id=e1e3b761-9b32-47ec-afff-ac19b136d807 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615511735Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-374330" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615657093Z" level=info msg="Image localhost/kicbase/echo-server:functional-374330 not found" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:38:50 functional-374330 crio[10567]: time="2025-12-02T19:38:50.615696862Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-374330 found" id=4b60a6ab-ef25-49c2-9022-a7e3e25e3d2a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 19:38:52.591767   24371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:52.592339   24371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:52.593394   24371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:52.593927   24371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 19:38:52.595413   24371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:38:52 up  1:21,  0 user,  load average: 0.61, 0.36, 0.33
	Linux functional-374330 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 19:38:49 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:50 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 02 19:38:50 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:50 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:50 functional-374330 kubelet[24180]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:50 functional-374330 kubelet[24180]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:50 functional-374330 kubelet[24180]: E1202 19:38:50.619626   24180 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:50 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:50 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:51 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 02 19:38:51 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:51 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:51 functional-374330 kubelet[24251]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:51 functional-374330 kubelet[24251]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:51 functional-374330 kubelet[24251]: E1202 19:38:51.358587   24251 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:51 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:51 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 19:38:52 functional-374330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 02 19:38:52 functional-374330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:52 functional-374330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 19:38:52 functional-374330 kubelet[24290]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:52 functional-374330 kubelet[24290]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 19:38:52 functional-374330 kubelet[24290]: E1202 19:38:52.101887   24290 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 19:38:52 functional-374330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 19:38:52 functional-374330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-374330 -n functional-374330: exit status 2 (326.327109ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-374330" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1202 19:36:45.419508   59135 out.go:360] Setting OutFile to fd 1 ...
I1202 19:36:45.423850   59135 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:36:45.424078   59135 out.go:374] Setting ErrFile to fd 2...
I1202 19:36:45.424103   59135 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:36:45.424636   59135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:36:45.425999   59135 mustload.go:66] Loading cluster: functional-374330
I1202 19:36:45.426473   59135 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:36:45.426998   59135 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
I1202 19:36:45.458200   59135 host.go:66] Checking if "functional-374330" exists ...
I1202 19:36:45.458503   59135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 19:36:45.576070   59135 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:36:45.56580532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 19:36:45.576188   59135 api_server.go:166] Checking apiserver status ...
I1202 19:36:45.576239   59135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 19:36:45.576274   59135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
I1202 19:36:45.601083   59135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
W1202 19:36:45.713024   59135 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1202 19:36:45.716483   59135 out.go:179] * The control-plane node functional-374330 apiserver is not running: (state=Stopped)
I1202 19:36:45.719360   59135 out.go:179]   To start a cluster, run: "minikube start -p functional-374330"

                                                
                                                
stdout: * The control-plane node functional-374330 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-374330"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 59134: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-374330 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-374330 apply -f testdata/testsvc.yaml: exit status 1 (92.758238ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-374330 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (103.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.110.46.155": Temporary Error: Get "http://10.110.46.155": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-374330 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-374330 get svc nginx-svc: exit status 1 (60.451104ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-374330 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (103.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-374330 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-374330 create deployment hello-node --image kicbase/echo-server: exit status 1 (54.88998ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-374330 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 service list: exit status 103 (272.65332ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-374330 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-374330"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-374330 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-374330 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-374330\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 service list -o json: exit status 103 (278.903551ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-374330 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-374330"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-374330 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 service --namespace=default --https --url hello-node: exit status 103 (254.189372ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-374330 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-374330"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-374330 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 service hello-node --url --format={{.IP}}: exit status 103 (258.709039ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-374330 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-374330"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-374330 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-374330 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-374330\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 service hello-node --url: exit status 103 (248.158975ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-374330 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-374330"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-374330 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-374330 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-374330"
functional_test.go:1579: failed to parse "* The control-plane node functional-374330 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-374330\"": parse "* The control-plane node functional-374330 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-374330\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764704316591789491" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764704316591789491" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764704316591789491" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001/test-1764704316591789491
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.073538ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:38:36.955128    4470 retry.go:31] will retry after 634.210869ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 19:38 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 19:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 19:38 test-1764704316591789491
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh cat /mount-9p/test-1764704316591789491
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-374330 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-374330 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (58.115835ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-374330 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (284.2196ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=41865)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  2 19:38 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  2 19:38 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  2 19:38 test-1764704316591789491
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-374330 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:41865
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001:/mount-9p --alsologtostderr -v=1] stderr:
I1202 19:38:36.659663   61482 out.go:360] Setting OutFile to fd 1 ...
I1202 19:38:36.659970   61482 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:36.659983   61482 out.go:374] Setting ErrFile to fd 2...
I1202 19:38:36.659990   61482 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:36.660237   61482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:38:36.660508   61482 mustload.go:66] Loading cluster: functional-374330
I1202 19:38:36.660908   61482 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:38:36.661429   61482 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
I1202 19:38:36.682289   61482 host.go:66] Checking if "functional-374330" exists ...
I1202 19:38:36.682607   61482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 19:38:36.780099   61482 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:36.769732824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 19:38:36.780309   61482 cli_runner.go:164] Run: docker network inspect functional-374330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 19:38:36.804061   61482 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001 into VM as /mount-9p ...
I1202 19:38:36.807265   61482 out.go:179]   - Mount type:   9p
I1202 19:38:36.810223   61482 out.go:179]   - User ID:      docker
I1202 19:38:36.813323   61482 out.go:179]   - Group ID:     docker
I1202 19:38:36.816292   61482 out.go:179]   - Version:      9p2000.L
I1202 19:38:36.819308   61482 out.go:179]   - Message Size: 262144
I1202 19:38:36.822233   61482 out.go:179]   - Options:      map[]
I1202 19:38:36.827291   61482 out.go:179]   - Bind Address: 192.168.49.1:41865
I1202 19:38:36.830061   61482 out.go:179] * Userspace file server: 
I1202 19:38:36.830383   61482 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1202 19:38:36.830480   61482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
I1202 19:38:36.851036   61482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
I1202 19:38:36.959643   61482 mount.go:180] unmount for /mount-9p ran successfully
I1202 19:38:36.959674   61482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1202 19:38:36.967907   61482 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=41865,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1202 19:38:36.978787   61482 main.go:127] stdlog: ufs.go:141 connected
I1202 19:38:36.978951   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tversion tag 65535 msize 262144 version '9P2000.L'
I1202 19:38:36.978990   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rversion tag 65535 msize 262144 version '9P2000'
I1202 19:38:36.979210   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1202 19:38:36.979273   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rattach tag 0 aqid (3b5bcc e092ecad 'd')
I1202 19:38:36.979521   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1202 19:38:36.979572   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcc e092ecad 'd') m d775 at 0 mt 1764704316 l 4096 t 0 d 0 ext )
I1202 19:38:36.985650   61482 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/.mount-process: {Name:mk7352d74849fd57c7f37a287343890d2b793515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 19:38:36.985870   61482 mount.go:105] mount successful: ""
I1202 19:38:36.989325   61482 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2013609216/001 to /mount-9p
I1202 19:38:36.992286   61482 out.go:203] 
I1202 19:38:36.995131   61482 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1202 19:38:38.125935   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1202 19:38:38.126011   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcc e092ecad 'd') m d775 at 0 mt 1764704316 l 4096 t 0 d 0 ext )
I1202 19:38:38.126342   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 1 
I1202 19:38:38.126373   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 
I1202 19:38:38.126497   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Topen tag 0 fid 1 mode 0
I1202 19:38:38.126546   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Ropen tag 0 qid (3b5bcc e092ecad 'd') iounit 0
I1202 19:38:38.126689   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1202 19:38:38.126742   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcc e092ecad 'd') m d775 at 0 mt 1764704316 l 4096 t 0 d 0 ext )
I1202 19:38:38.126885   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 0 count 262120
I1202 19:38:38.127016   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 258
I1202 19:38:38.127155   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 261862
I1202 19:38:38.127189   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1202 19:38:38.127349   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 262120
I1202 19:38:38.127390   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1202 19:38:38.127545   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1202 19:38:38.127613   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (3b5bcd e092ecad '') 
I1202 19:38:38.127739   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.127800   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5bcd e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.127945   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.127983   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5bcd e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.128119   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1202 19:38:38.128144   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.128280   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'test-1764704316591789491' 
I1202 19:38:38.128342   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (3b5bcf e092ecad '') 
I1202 19:38:38.128476   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.128539   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1764704316591789491' 'jenkins' 'jenkins' '' q (3b5bcf e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.128651   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.128695   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1764704316591789491' 'jenkins' 'jenkins' '' q (3b5bcf e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.128833   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1202 19:38:38.128860   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.128993   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1202 19:38:38.129032   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (3b5bce e092ecad '') 
I1202 19:38:38.129160   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.129195   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5bce e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.129309   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.129344   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5bce e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.129473   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1202 19:38:38.129493   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.129625   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 262120
I1202 19:38:38.129689   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1202 19:38:38.129851   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 1
I1202 19:38:38.129883   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.385503   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 1 0:'test-1764704316591789491' 
I1202 19:38:38.385575   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (3b5bcf e092ecad '') 
I1202 19:38:38.385756   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 1
I1202 19:38:38.385798   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1764704316591789491' 'jenkins' 'jenkins' '' q (3b5bcf e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.385956   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 1 newfid 2 
I1202 19:38:38.386008   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 
I1202 19:38:38.386115   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Topen tag 0 fid 2 mode 0
I1202 19:38:38.386178   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Ropen tag 0 qid (3b5bcf e092ecad '') iounit 0
I1202 19:38:38.386323   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 1
I1202 19:38:38.386359   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1764704316591789491' 'jenkins' 'jenkins' '' q (3b5bcf e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.386508   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 2 offset 0 count 262120
I1202 19:38:38.386552   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 24
I1202 19:38:38.386669   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 2 offset 24 count 262120
I1202 19:38:38.386697   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1202 19:38:38.386852   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 2 offset 24 count 262120
I1202 19:38:38.386895   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1202 19:38:38.387045   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1202 19:38:38.387081   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.387257   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 1
I1202 19:38:38.387283   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.732439   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1202 19:38:38.732515   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcc e092ecad 'd') m d775 at 0 mt 1764704316 l 4096 t 0 d 0 ext )
I1202 19:38:38.732856   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 1 
I1202 19:38:38.732908   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 
I1202 19:38:38.733049   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Topen tag 0 fid 1 mode 0
I1202 19:38:38.733122   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Ropen tag 0 qid (3b5bcc e092ecad 'd') iounit 0
I1202 19:38:38.733234   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 0
I1202 19:38:38.733277   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5bcc e092ecad 'd') m d775 at 0 mt 1764704316 l 4096 t 0 d 0 ext )
I1202 19:38:38.733438   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 0 count 262120
I1202 19:38:38.733552   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 258
I1202 19:38:38.733714   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 261862
I1202 19:38:38.733743   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1202 19:38:38.733857   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 262120
I1202 19:38:38.733883   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1202 19:38:38.734019   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1202 19:38:38.734054   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (3b5bcd e092ecad '') 
I1202 19:38:38.734162   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.734193   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5bcd e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.734327   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.734359   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5bcd e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.734476   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1202 19:38:38.734500   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.734637   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'test-1764704316591789491' 
I1202 19:38:38.734670   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (3b5bcf e092ecad '') 
I1202 19:38:38.734779   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.734822   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1764704316591789491' 'jenkins' 'jenkins' '' q (3b5bcf e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.734953   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.734986   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('test-1764704316591789491' 'jenkins' 'jenkins' '' q (3b5bcf e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.735101   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1202 19:38:38.735124   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.735262   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1202 19:38:38.735295   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rwalk tag 0 (3b5bce e092ecad '') 
I1202 19:38:38.735404   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.735437   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5bce e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.735572   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tstat tag 0 fid 2
I1202 19:38:38.735603   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5bce e092ecad '') m 644 at 0 mt 1764704316 l 24 t 0 d 0 ext )
I1202 19:38:38.735715   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 2
I1202 19:38:38.735743   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.735868   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tread tag 0 fid 1 offset 258 count 262120
I1202 19:38:38.735906   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rread tag 0 count 0
I1202 19:38:38.736033   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 1
I1202 19:38:38.736061   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:38.737177   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1202 19:38:38.737249   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rerror tag 0 ename 'file not found' ecode 0
I1202 19:38:39.006780   61482 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33704 Tclunk tag 0 fid 0
I1202 19:38:39.006831   61482 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33704 Rclunk tag 0
I1202 19:38:39.007864   61482 main.go:127] stdlog: ufs.go:147 disconnected
I1202 19:38:39.029921   61482 out.go:179] * Unmounting /mount-9p ...
I1202 19:38:39.032903   61482 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1202 19:38:39.040026   61482 mount.go:180] unmount for /mount-9p ran successfully
I1202 19:38:39.040131   61482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/.mount-process: {Name:mk7352d74849fd57c7f37a287343890d2b793515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 19:38:39.043280   61482 out.go:203] 
W1202 19:38:39.046202   61482 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1202 19:38:39.049110   61482 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.54s)
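
The 9p server above was serving Twalk/Tread/Tclunk traffic normally and the unmount ran cleanly; the only failure is the MK_INTERRUPTED termination signal. A minimal manual re-run of the same mount scenario, assuming an arbitrary host directory /tmp/mount-test stands in for the temp dir the test creates:

# start the 9p mount on an ephemeral port, check it from inside the node, then clean up
out/minikube-linux-arm64 -p functional-374330 mount /tmp/mount-test:/mount-9p --alsologtostderr &
out/minikube-linux-arm64 -p functional-374330 ssh -- "ls -la /mount-9p"
out/minikube-linux-arm64 -p functional-374330 ssh -- "sudo umount -f -l /mount-9p"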

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-374330" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (0.89s)
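
ImageReloadDaemon and ImageTagAndLoadDaemon below fail on the same assertion, so one manual check covers all three. A sketch using the same profile and tag as the test:

# repeat the load and check whether the tag becomes visible inside the cluster
docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-374330
out/minikube-linux-arm64 -p functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr
out/minikube-linux-arm64 -p functional-374330 image ls | grep echo-server || echo "tag not present in the cluster"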

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-374330" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-374330
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image load --daemon kicbase/echo-server:functional-374330 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-374330" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image save kicbase/echo-server:functional-374330 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.30s)
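
This save never writes the tar, and ImageLoadFromFile below fails looking for the same path. A quick manual check with the paths from the test output:

# re-run the save and confirm whether anything lands at the expected path
out/minikube-linux-arm64 -p functional-374330 image save kicbase/echo-server:functional-374330 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
ls -l /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar 2>/dev/null || echo "no tar was written"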

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1202 19:38:50.902136   63960 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:38:50.902319   63960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:50.902333   63960 out.go:374] Setting ErrFile to fd 2...
	I1202 19:38:50.902339   63960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:50.902618   63960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:38:50.903244   63960 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:38:50.903409   63960 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:38:50.903977   63960 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
	I1202 19:38:50.921810   63960 ssh_runner.go:195] Run: systemctl --version
	I1202 19:38:50.921861   63960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
	I1202 19:38:50.938508   63960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
	I1202 19:38:51.039999   63960 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1202 19:38:51.040078   63960 cache_images.go:255] Failed to load cached images for "functional-374330": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1202 19:38:51.040101   63960 cache_images.go:267] failed pushing to: functional-374330

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.20s)
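
The stat error above confirms this is a cascade from the missing save artifact rather than a separate load bug. A sketch that produces the tar from the host Docker daemon instead and retries the load, assuming kicbase/echo-server:functional-374330 still exists in the host daemon:

# build the tarball with docker save, then load it through minikube
docker save kicbase/echo-server:functional-374330 -o /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-374330 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
out/minikube-linux-arm64 -p functional-374330 image ls | grep echo-server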

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-374330
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image save --daemon kicbase/echo-server:functional-374330 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-374330
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-374330: exit status 1 (15.401088ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-374330

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-374330

** /stderr **
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)
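
The test removes the host-side tag first (docker rmi above) and expects image save --daemon to push the image back into Docker under the localhost/ prefix. A manual check with the same names as the test:

# re-run the save to the host daemon and list what actually arrived
out/minikube-linux-arm64 -p functional-374330 image save --daemon kicbase/echo-server:functional-374330 --alsologtostderr
docker images | grep echo-server || echo "nothing was written back to the docker daemon"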

TestMultiControlPlane/serial/RestartClusterKeepsNodes (528.76s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 stop --alsologtostderr -v 5
E1202 19:46:40.448935    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:46:45.851143    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:46:49.245249    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 stop --alsologtostderr -v 5: (31.779085169s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 start --wait true --alsologtostderr -v 5
E1202 19:46:57.357050    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:47:13.558113    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:48:46.176066    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:51:45.851829    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:51:57.357701    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:53:46.175454    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-791576 start --wait true --alsologtostderr -v 5: exit status 80 (8m14.060779435s)

-- stdout --
	* [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	* Enabled addons: 
	
	* Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-791576-m03" control-plane node in "ha-791576" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	
-- /stdout --
** stderr ** 
	I1202 19:46:51.075692   85424 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:46:51.075825   85424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:46:51.075836   85424 out.go:374] Setting ErrFile to fd 2...
	I1202 19:46:51.075841   85424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:46:51.076149   85424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:46:51.076551   85424 out.go:368] Setting JSON to false
	I1202 19:46:51.077367   85424 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5349,"bootTime":1764699462,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:46:51.077442   85424 start.go:143] virtualization:  
	I1202 19:46:51.082662   85424 out.go:179] * [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:46:51.085642   85424 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:46:51.085706   85424 notify.go:221] Checking for updates...
	I1202 19:46:51.091665   85424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:46:51.094539   85424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:51.097403   85424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:46:51.100336   85424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:46:51.103289   85424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:46:51.106849   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:51.106965   85424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:46:51.138890   85424 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:46:51.139003   85424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:46:51.198061   85424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:46:51.188947665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:46:51.198169   85424 docker.go:319] overlay module found
	I1202 19:46:51.201303   85424 out.go:179] * Using the docker driver based on existing profile
	I1202 19:46:51.204063   85424 start.go:309] selected driver: docker
	I1202 19:46:51.204087   85424 start.go:927] validating driver "docker" against &{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:51.204223   85424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:46:51.204328   85424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:46:51.266558   85424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:46:51.256321599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:46:51.266979   85424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:46:51.267013   85424 cni.go:84] Creating CNI manager for ""
	I1202 19:46:51.267084   85424 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 19:46:51.267148   85424 start.go:353] cluster config:
	{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:51.272255   85424 out.go:179] * Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	I1202 19:46:51.275067   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:46:51.277961   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:46:51.280789   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:51.280839   85424 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 19:46:51.280871   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:46:51.280873   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:46:51.280964   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:46:51.280974   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:46:51.281126   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:51.300000   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:46:51.300023   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:46:51.300050   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:46:51.300081   85424 start.go:360] acquireMachinesLock for ha-791576: {Name:mkf0dd990410be65dae2a66c72db1ebd52e2b0ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:46:51.300153   85424 start.go:364] duration metric: took 46.004µs to acquireMachinesLock for "ha-791576"
	I1202 19:46:51.300175   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:46:51.300183   85424 fix.go:54] fixHost starting: 
	I1202 19:46:51.300454   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:46:51.316816   85424 fix.go:112] recreateIfNeeded on ha-791576: state=Stopped err=<nil>
	W1202 19:46:51.316845   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:46:51.320143   85424 out.go:252] * Restarting existing docker container for "ha-791576" ...
	I1202 19:46:51.320230   85424 cli_runner.go:164] Run: docker start ha-791576
	I1202 19:46:51.575902   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:46:51.594134   85424 kic.go:430] container "ha-791576" state is running.
	I1202 19:46:51.594514   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:51.619517   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:51.619754   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:46:51.619817   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:51.639059   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:51.639374   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:51.639778   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:46:51.641510   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35428->127.0.0.1:32813: read: connection reset by peer
	I1202 19:46:54.791183   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:46:54.791204   85424 ubuntu.go:182] provisioning hostname "ha-791576"
	I1202 19:46:54.791275   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:54.809134   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:54.809441   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:54.809458   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576 && echo "ha-791576" | sudo tee /etc/hostname
	I1202 19:46:54.966477   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:46:54.966565   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:54.984050   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:54.984375   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:54.984402   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:46:55.137902   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:46:55.137928   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:46:55.138006   85424 ubuntu.go:190] setting up certificates
	I1202 19:46:55.138016   85424 provision.go:84] configureAuth start
	I1202 19:46:55.138084   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:55.155651   85424 provision.go:143] copyHostCerts
	I1202 19:46:55.155701   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:46:55.155740   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:46:55.155758   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:46:55.155836   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:46:55.155925   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:46:55.155955   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:46:55.155965   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:46:55.155993   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:46:55.156051   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:46:55.156071   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:46:55.156082   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:46:55.156108   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:46:55.156162   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576 san=[127.0.0.1 192.168.49.2 ha-791576 localhost minikube]
	I1202 19:46:55.641637   85424 provision.go:177] copyRemoteCerts
	I1202 19:46:55.641717   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:46:55.641763   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:55.660498   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:55.765103   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:46:55.765169   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:46:55.782097   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:46:55.782154   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:46:55.798837   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:46:55.798898   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 19:46:55.816023   85424 provision.go:87] duration metric: took 677.979406ms to configureAuth
	I1202 19:46:55.816052   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:46:55.816326   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:55.816455   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:55.833499   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:55.833854   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:55.833876   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:46:56.249298   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:46:56.249319   85424 machine.go:97] duration metric: took 4.629549894s to provisionDockerMachine
	I1202 19:46:56.249331   85424 start.go:293] postStartSetup for "ha-791576" (driver="docker")
	I1202 19:46:56.249341   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:46:56.249400   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:46:56.249454   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.268549   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.373420   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:46:56.376533   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:46:56.376562   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:46:56.376586   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:46:56.376642   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:46:56.376760   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:46:56.376771   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:46:56.376874   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:46:56.383745   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:46:56.400262   85424 start.go:296] duration metric: took 150.916843ms for postStartSetup
	I1202 19:46:56.400381   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:46:56.400460   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.420055   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.522566   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:46:56.527172   85424 fix.go:56] duration metric: took 5.226983089s for fixHost
	I1202 19:46:56.527198   85424 start.go:83] releasing machines lock for "ha-791576", held for 5.227032622s
	I1202 19:46:56.527261   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:56.543387   85424 ssh_runner.go:195] Run: cat /version.json
	I1202 19:46:56.543430   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:46:56.543494   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.543434   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.561404   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.561708   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.749544   85424 ssh_runner.go:195] Run: systemctl --version
	I1202 19:46:56.755696   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:46:56.790499   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:46:56.794459   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:46:56.794568   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:46:56.801919   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:46:56.801941   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:46:56.801971   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:46:56.802028   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:46:56.816910   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:46:56.829587   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:46:56.829715   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:46:56.844766   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:46:56.857092   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:46:56.975356   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:46:57.091555   85424 docker.go:234] disabling docker service ...
	I1202 19:46:57.091665   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:46:57.106660   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:46:57.120539   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:46:57.239669   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:46:57.366517   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:46:57.382471   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:46:57.396694   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:46:57.396813   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.405941   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:46:57.406053   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.415370   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.424417   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.433387   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:46:57.442311   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.451228   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.459398   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.468002   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:46:57.475168   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:46:57.482408   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:46:57.597548   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:46:57.804313   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:46:57.804451   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:46:57.808320   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:46:57.808445   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:46:57.812025   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:46:57.839390   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:46:57.839543   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:46:57.867354   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:46:57.901354   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:46:57.904220   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:46:57.920051   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:46:57.923689   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:46:57.933012   85424 kubeadm.go:884] updating cluster {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:46:57.933164   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:57.933217   85424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:46:57.967565   85424 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:46:57.967590   85424 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:46:57.967641   85424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:46:57.994848   85424 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:46:57.994872   85424 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:46:57.994881   85424 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:46:57.994976   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
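The ExecStart line above is delivered as a systemd drop-in (it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). Whether the running unit actually picked the flags up can be checked afterwards; a sketch, not part of the original run:

	minikube ssh -p ha-791576 -- "systemctl cat kubelet; systemctl show kubelet -p ExecStart"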
	I1202 19:46:57.995055   85424 ssh_runner.go:195] Run: crio config
	I1202 19:46:58.061390   85424 cni.go:84] Creating CNI manager for ""
	I1202 19:46:58.061418   85424 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 19:46:58.061446   85424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:46:58.061470   85424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-791576 NodeName:ha-791576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:46:58.061604   85424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-791576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
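The kubeadm config above is what later lands in /var/tmp/minikube/kubeadm.yaml.new. Once the file has been copied (a few lines below), it can be sanity-checked on the node; a sketch, assuming this kubeadm build supports the "config validate" subcommand and that the binaries directory is the one listed in this log:

	minikube ssh -p ha-791576 -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new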
	
	I1202 19:46:58.061624   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:46:58.061690   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:46:58.074421   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:46:58.074559   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
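Because the ip_vs modules were not found (see the lsmod check above), kube-vip falls back to plain ARP advertisement of the VIP and control-plane load-balancing is skipped. Once the manifest above is in place and the node is back up, the VIP can be probed; a hedged sketch, not part of the original run (the VIP is only bound on the current kube-vip leader, and /livez is usually, but not always, readable anonymously):

	minikube ssh -p ha-791576 -- "ip addr show eth0 | grep 192.168.49.254 || true"
	minikube ssh -p ha-791576 -- "curl -sk https://192.168.49.254:8443/livez; echo"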
	I1202 19:46:58.074648   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:46:58.083182   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:46:58.083291   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 19:46:58.091465   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 19:46:58.104313   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:46:58.118107   85424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1202 19:46:58.130768   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:46:58.143041   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:46:58.146530   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
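The grep plus rewrite above pins control-plane.minikube.internal to the HA VIP 192.168.49.254 inside the node's /etc/hosts. A trivial verification, not from the log:

	minikube ssh -p ha-791576 -- getent hosts control-plane.minikube.internal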
	I1202 19:46:58.155934   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:46:58.272546   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:46:58.287479   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.2
	I1202 19:46:58.287498   85424 certs.go:195] generating shared ca certs ...
	I1202 19:46:58.287513   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.287678   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:46:58.287718   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:46:58.287725   85424 certs.go:257] generating profile certs ...
	I1202 19:46:58.287810   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:46:58.287835   85424 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad
	I1202 19:46:58.287850   85424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1202 19:46:58.432480   85424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad ...
	I1202 19:46:58.432627   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad: {Name:mkc49591a089fa34cc904adb89cfa288cc2b970e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.432873   85424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad ...
	I1202 19:46:58.432910   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad: {Name:mk0be3cbf6db1780ac4ac275259d854f38f2158a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.433068   85424 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt
	I1202 19:46:58.433251   85424 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key
	I1202 19:46:58.433443   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
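The freshly signed apiserver cert carries the service IP, loopback, all three control-plane node IPs and the VIP as SANs (the IP list is printed above). The SANs can be confirmed with openssl on the host; a sketch using the path from this run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'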
	I1202 19:46:58.433477   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:46:58.433511   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:46:58.433556   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:46:58.433591   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:46:58.433624   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:46:58.433685   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:46:58.433721   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:46:58.433750   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:46:58.433833   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:46:58.433893   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:46:58.433920   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:46:58.433994   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:46:58.434052   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:46:58.434132   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:46:58.434225   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:46:58.434290   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.434337   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.434370   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.443939   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:46:58.463785   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:46:58.486458   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:46:58.508445   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:46:58.530317   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:46:58.548462   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:46:58.568358   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:46:58.586970   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:46:58.604714   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:46:58.627145   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:46:58.645042   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:46:58.663909   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:46:58.676006   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:46:58.681961   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:46:58.689749   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.693060   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.693152   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.735524   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:46:58.745065   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:46:58.754338   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.759068   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.759143   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.803928   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:46:58.811507   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:46:58.819506   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.823153   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.823249   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.865967   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
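The three ln -fs commands above create the OpenSSL subject-hash links (/etc/ssl/certs/<hash>.0) that system TLS verification looks up; the hash that names each link is exactly what the preceding "openssl x509 -hash -noout" calls print. Sketched for the minikube CA, run inside the node, not part of the original log:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"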
	I1202 19:46:58.874198   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:46:58.878028   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:46:58.919236   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:46:58.961187   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:46:59.007842   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:46:59.061600   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:46:59.127987   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
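Each -checkend 86400 call above exits non-zero if the cert would expire within 24 hours, which is what triggers regeneration. The same sweep over every control-plane cert can be scripted; a sketch, run inside the node:

	sudo sh -c 'for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  openssl x509 -noout -checkend 86400 -in "$c" && echo "OK       $c" || echo "EXPIRING $c"
	done'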
	I1202 19:46:59.207795   85424 kubeadm.go:401] StartCluster: {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:59.207925   85424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:46:59.207988   85424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:46:59.265803   85424 cri.go:89] found id: "71e9ce78d64661ac6d00283cdb79e431fdb65c5c2f57fa8aaa18d21677420d38"
	I1202 19:46:59.265827   85424 cri.go:89] found id: "a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9"
	I1202 19:46:59.265833   85424 cri.go:89] found id: "0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44"
	I1202 19:46:59.265836   85424 cri.go:89] found id: "392beb226748f9eb08b097b707e9c3fae2ea843b47c447e75c2c16d866e678de"
	I1202 19:46:59.265840   85424 cri.go:89] found id: "a038e721d900d1d05f302d84321aed3efa00807fa84f377dff1bb59ed20d56ce"
	I1202 19:46:59.265843   85424 cri.go:89] found id: ""
	I1202 19:46:59.265890   85424 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 19:46:59.290356   85424 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:46:59Z" level=error msg="open /run/runc: no such file or directory"
	I1202 19:46:59.290428   85424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:46:59.301612   85424 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:46:59.301633   85424 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:46:59.301705   85424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:46:59.310893   85424 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:46:59.311284   85424 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-791576" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:59.311384   85424 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "ha-791576" cluster setting kubeconfig missing "ha-791576" context setting]
	I1202 19:46:59.311696   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.312205   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:46:59.312709   85424 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:46:59.312741   85424 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:46:59.312748   85424 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:46:59.312753   85424 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:46:59.312758   85424 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:46:59.313075   85424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:46:59.313166   85424 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:46:59.323603   85424 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:46:59.323629   85424 kubeadm.go:602] duration metric: took 21.981794ms to restartPrimaryControlPlane
	I1202 19:46:59.323638   85424 kubeadm.go:403] duration metric: took 115.854562ms to StartCluster
	I1202 19:46:59.323653   85424 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.323714   85424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:59.324315   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.324515   85424 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:46:59.324543   85424 start.go:242] waiting for startup goroutines ...
	I1202 19:46:59.324556   85424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:46:59.325058   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:59.330563   85424 out.go:179] * Enabled addons: 
	I1202 19:46:59.333607   85424 addons.go:530] duration metric: took 9.049214ms for enable addons: enabled=[]
	I1202 19:46:59.333674   85424 start.go:247] waiting for cluster config update ...
	I1202 19:46:59.333687   85424 start.go:256] writing updated cluster config ...
	I1202 19:46:59.337224   85424 out.go:203] 
	I1202 19:46:59.340497   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:59.340616   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.343973   85424 out.go:179] * Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	I1202 19:46:59.346800   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:46:59.349828   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:46:59.352721   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:59.352753   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:46:59.352862   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:46:59.352879   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:46:59.353002   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.353206   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:46:59.379004   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:46:59.379030   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:46:59.379043   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:46:59.379066   85424 start.go:360] acquireMachinesLock for ha-791576-m02: {Name:mka8905b718126ef1af03281337b5ea61f190248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:46:59.379121   85424 start.go:364] duration metric: took 35.265µs to acquireMachinesLock for "ha-791576-m02"
	I1202 19:46:59.379145   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:46:59.379150   85424 fix.go:54] fixHost starting: m02
	I1202 19:46:59.379415   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:46:59.419284   85424 fix.go:112] recreateIfNeeded on ha-791576-m02: state=Stopped err=<nil>
	W1202 19:46:59.419317   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:46:59.422504   85424 out.go:252] * Restarting existing docker container for "ha-791576-m02" ...
	I1202 19:46:59.422616   85424 cli_runner.go:164] Run: docker start ha-791576-m02
	I1202 19:46:59.837868   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:46:59.874389   85424 kic.go:430] container "ha-791576-m02" state is running.
	I1202 19:46:59.874756   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:46:59.901234   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.901470   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:46:59.901529   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:46:59.939434   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:59.939741   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:46:59.939756   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:46:59.941956   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:47:03.181981   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:47:03.182010   85424 ubuntu.go:182] provisioning hostname "ha-791576-m02"
	I1202 19:47:03.182083   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.211290   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:03.211596   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:03.211614   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m02 && echo "ha-791576-m02" | sudo tee /etc/hostname
	I1202 19:47:03.424005   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:47:03.424078   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.477630   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:03.477958   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:03.477977   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:47:03.677990   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:47:03.678027   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:47:03.678048   85424 ubuntu.go:190] setting up certificates
	I1202 19:47:03.678060   85424 provision.go:84] configureAuth start
	I1202 19:47:03.678128   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:47:03.701231   85424 provision.go:143] copyHostCerts
	I1202 19:47:03.701274   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:03.701304   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:47:03.701318   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:03.701396   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:47:03.701478   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:03.701500   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:47:03.701510   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:03.701537   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:47:03.701637   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:03.701668   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:47:03.701674   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:03.701705   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:47:03.701761   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m02 san=[127.0.0.1 192.168.49.3 ha-791576-m02 localhost minikube]
	I1202 19:47:03.945165   85424 provision.go:177] copyRemoteCerts
	I1202 19:47:03.945235   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:47:03.945280   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.975366   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.102132   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:47:04.102208   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:47:04.134543   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:47:04.134604   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:47:04.161226   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:47:04.161297   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:47:04.192644   85424 provision.go:87] duration metric: took 514.571013ms to configureAuth
	I1202 19:47:04.192676   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:47:04.192912   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:04.193014   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.219315   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:04.219619   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:04.219638   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:47:04.675291   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:47:04.675356   85424 machine.go:97] duration metric: took 4.773873492s to provisionDockerMachine
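The /etc/sysconfig/crio.minikube file written above is, by assumption about the kicbase unit layout (the unit itself is not shown in this log), read by the crio service as an environment file so that the --insecure-registry option reaches the daemon. A sketch to confirm it on the restarted node:

	minikube ssh -p ha-791576 -n m02 -- "systemctl cat crio | grep -i environment; ps -o args= -C crio"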
	I1202 19:47:04.675373   85424 start.go:293] postStartSetup for "ha-791576-m02" (driver="docker")
	I1202 19:47:04.675386   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:47:04.675452   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:47:04.675498   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.694108   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.797554   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:47:04.800903   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:47:04.800934   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:47:04.800945   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:47:04.801002   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:47:04.801077   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:47:04.801089   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:47:04.801185   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:47:04.808567   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:04.826419   85424 start.go:296] duration metric: took 151.029848ms for postStartSetup
	I1202 19:47:04.826519   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:47:04.826573   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.843360   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.943115   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:47:04.948188   85424 fix.go:56] duration metric: took 5.569031295s for fixHost
	I1202 19:47:04.948214   85424 start.go:83] releasing machines lock for "ha-791576-m02", held for 5.56907917s
	I1202 19:47:04.948279   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:47:04.970572   85424 out.go:179] * Found network options:
	I1202 19:47:04.973538   85424 out.go:179]   - NO_PROXY=192.168.49.2
	W1202 19:47:04.976397   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:04.976445   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:47:04.976513   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:47:04.976562   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.976885   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:47:04.976937   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.998993   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:05.000433   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:05.146894   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:47:05.207886   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:47:05.207960   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:47:05.215827   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:47:05.215855   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:47:05.215923   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:47:05.215992   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:47:05.231545   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:47:05.245040   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:47:05.245102   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:47:05.260499   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:47:05.273511   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:47:05.399821   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:47:05.547719   85424 docker.go:234] disabling docker service ...
	I1202 19:47:05.547833   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:47:05.574826   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:47:05.600862   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:47:05.835995   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:47:06.044894   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:47:06.061431   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:47:06.092815   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:47:06.092932   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.102629   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:47:06.102737   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.112408   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.122046   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.131510   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:47:06.140127   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.149293   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.162481   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.173417   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:47:06.181633   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:47:06.189368   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:06.407349   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
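After the sed edits above and the restart, /etc/crio/crio.conf.d/02-crio.conf should pin the pause image to registry.k8s.io/pause:3.10.1, the cgroup manager to cgroupfs, conmon_cgroup to pod, and re-add the ip_unprivileged_port_start sysctl. A quick verification sketch, not part of the original run:

	minikube ssh -p ha-791576 -n m02 -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"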
	I1202 19:47:06.656582   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:47:06.656693   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:47:06.660537   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:47:06.660607   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:47:06.664156   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:47:06.693772   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:47:06.693853   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:47:06.722024   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:47:06.754035   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:47:06.757007   85424 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:47:06.759990   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:47:06.777500   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:47:06.781343   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:47:06.791187   85424 mustload.go:66] Loading cluster: ha-791576
	I1202 19:47:06.791444   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:06.791707   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:47:06.808279   85424 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:47:06.808561   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.3
	I1202 19:47:06.808576   85424 certs.go:195] generating shared ca certs ...
	I1202 19:47:06.808596   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:47:06.808787   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:47:06.808843   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:47:06.808854   85424 certs.go:257] generating profile certs ...
	I1202 19:47:06.808932   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:47:06.808997   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.7b209479
	I1202 19:47:06.809041   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:47:06.809055   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:47:06.809070   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:47:06.809087   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:47:06.809100   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:47:06.809110   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:47:06.809124   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:47:06.809139   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:47:06.809152   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:47:06.809203   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:47:06.809238   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:47:06.809249   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:47:06.809275   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:47:06.809305   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:47:06.809331   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:47:06.809375   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:06.809409   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:06.809426   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:47:06.809437   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:47:06.809494   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:47:06.826818   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:47:06.926038   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:47:06.930094   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:47:06.938514   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:47:06.942246   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:47:06.951163   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:47:06.954843   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:47:06.962999   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:47:06.966675   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:47:06.975178   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:47:06.978885   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:47:06.987509   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:47:06.990939   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:47:06.999005   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:47:07.017141   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:47:07.034232   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:47:07.052223   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:47:07.068874   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:47:07.085118   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:47:07.102568   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:47:07.119624   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:47:07.137149   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:47:07.155661   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:47:07.174795   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:47:07.191770   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:47:07.204561   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:47:07.217443   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:47:07.230339   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:47:07.242695   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:47:07.255417   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:47:07.267762   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:47:07.280304   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:47:07.286551   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:47:07.294800   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.298454   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.298514   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.338926   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:47:07.346584   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:47:07.354270   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.358006   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.358069   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.398667   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:47:07.406676   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:47:07.414843   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.419161   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.419247   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.460207   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
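The three "openssl x509 -hash -noout" / "ln -fs" sequences above install each CA PEM under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL finds trusted CAs. As a minimal sketch only, not minikube's certs.go implementation, the helper below (linkCACert is a hypothetical name) shells out to the same openssl invocation and creates the hash symlink:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the symlink step in the log: hash the PEM's subject with
// `openssl x509 -hash -noout -in <pem>` and point <certsDir>/<hash>.0 at it.
// On a real host this needs root, just like the sudo commands in the log.
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}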
	I1202 19:47:07.467798   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:47:07.471321   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:47:07.514285   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:47:07.561278   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:47:07.603224   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:47:07.644697   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:47:07.686079   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
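The six "openssl x509 -noout ... -checkend 86400" runs above confirm that the node's existing control-plane certificates (apiserver/kubelet/etcd clients, etcd server and peer, front-proxy client) will still be valid for at least another 24 hours before they are reused. A minimal sketch of the same check done in-process with Go's crypto/x509 instead of shelling out (expiresWithin is a hypothetical helper, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath will have expired
// d from now, roughly what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}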
	I1202 19:47:07.727346   85424 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1202 19:47:07.727470   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:47:07.727522   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:47:07.727601   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:47:07.740480   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:47:07.740546   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
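Just before this manifest was rendered, the probe `sudo sh -c "lsmod | grep ip_vs"` exited with status 1, so minikube gave up on control-plane load-balancing and generated the kube-vip static-pod config above without it. A minimal sketch of that decision, assuming the same shell-out to lsmod that the log shows (ipvsAvailable is an invented name, not the kube-vip.go implementation):

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable runs the same `lsmod | grep ip_vs` probe seen in the log;
// grep exits non-zero when no ip_vs modules are loaded.
func ipvsAvailable() bool {
	return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs modules present: control-plane load-balancing could be enabled")
	} else {
		fmt.Println("ip_vs modules missing: generate kube-vip config without load-balancing")
	}
}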
	I1202 19:47:07.740622   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:47:07.748776   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:47:07.748850   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:47:07.756859   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:47:07.770007   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:47:07.782397   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:47:07.795978   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:47:07.799804   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:47:07.808809   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:07.936978   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:47:07.950174   85424 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:47:07.950576   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:07.954257   85424 out.go:179] * Verifying Kubernetes components...
	I1202 19:47:07.957286   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:08.088938   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:47:08.104389   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:47:08.104523   85424 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:47:08.104787   85424 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m02" to be "Ready" ...
	W1202 19:47:18.106667   85424 node_ready.go:55] error getting node "ha-791576-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-791576-m02": net/http: TLS handshake timeout
	I1202 19:47:20.815620   85424 node_ready.go:49] node "ha-791576-m02" is "Ready"
	I1202 19:47:20.815646   85424 node_ready.go:38] duration metric: took 12.710819831s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:47:20.815659   85424 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:47:20.815715   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:21.316644   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:21.816110   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:22.315948   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:22.815840   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.316118   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.815903   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.838753   85424 api_server.go:72] duration metric: took 15.888533132s to wait for apiserver process to appear ...
	I1202 19:47:23.838776   85424 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:47:23.838807   85424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:47:23.866609   85424 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:47:23.870765   85424 api_server.go:141] control plane version: v1.34.2
	I1202 19:47:23.870793   85424 api_server.go:131] duration metric: took 32.004959ms to wait for apiserver health ...
	I1202 19:47:23.870804   85424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:47:23.889009   85424 system_pods.go:59] 26 kube-system pods found
	I1202 19:47:23.889120   85424 system_pods.go:61] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.889176   85424 system_pods.go:61] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.889202   85424 system_pods.go:61] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:47:23.889222   85424 system_pods.go:61] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:47:23.889255   85424 system_pods.go:61] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:47:23.889279   85424 system_pods.go:61] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:47:23.889300   85424 system_pods.go:61] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:47:23.889339   85424 system_pods.go:61] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:47:23.889361   85424 system_pods.go:61] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:47:23.889396   85424 system_pods.go:61] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.889439   85424 system_pods.go:61] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.889463   85424 system_pods.go:61] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:47:23.889517   85424 system_pods.go:61] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.889553   85424 system_pods.go:61] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.889589   85424 system_pods.go:61] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:47:23.889612   85424 system_pods.go:61] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:47:23.889629   85424 system_pods.go:61] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:47:23.889649   85424 system_pods.go:61] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:47:23.889703   85424 system_pods.go:61] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:47:23.889730   85424 system_pods.go:61] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:47:23.889767   85424 system_pods.go:61] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:47:23.889789   85424 system_pods.go:61] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:47:23.889813   85424 system_pods.go:61] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:47:23.889853   85424 system_pods.go:61] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:47:23.889881   85424 system_pods.go:61] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:47:23.889945   85424 system_pods.go:61] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:47:23.889982   85424 system_pods.go:74] duration metric: took 19.17073ms to wait for pod list to return data ...
	I1202 19:47:23.890015   85424 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:47:23.903242   85424 default_sa.go:45] found service account: "default"
	I1202 19:47:23.903345   85424 default_sa.go:55] duration metric: took 13.295846ms for default service account to be created ...
	I1202 19:47:23.903390   85424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:47:23.918952   85424 system_pods.go:86] 26 kube-system pods found
	I1202 19:47:23.919047   85424 system_pods.go:89] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.919079   85424 system_pods.go:89] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.919121   85424 system_pods.go:89] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:47:23.919147   85424 system_pods.go:89] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:47:23.919165   85424 system_pods.go:89] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:47:23.919210   85424 system_pods.go:89] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:47:23.919234   85424 system_pods.go:89] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:47:23.919257   85424 system_pods.go:89] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:47:23.919293   85424 system_pods.go:89] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:47:23.919328   85424 system_pods.go:89] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.919349   85424 system_pods.go:89] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.919407   85424 system_pods.go:89] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:47:23.919452   85424 system_pods.go:89] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.919498   85424 system_pods.go:89] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.919527   85424 system_pods.go:89] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:47:23.919571   85424 system_pods.go:89] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:47:23.919594   85424 system_pods.go:89] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:47:23.919611   85424 system_pods.go:89] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:47:23.919658   85424 system_pods.go:89] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:47:23.919681   85424 system_pods.go:89] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:47:23.919700   85424 system_pods.go:89] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:47:23.919737   85424 system_pods.go:89] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:47:23.919770   85424 system_pods.go:89] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:47:23.919789   85424 system_pods.go:89] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:47:23.919824   85424 system_pods.go:89] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:47:23.919853   85424 system_pods.go:89] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:47:23.919880   85424 system_pods.go:126] duration metric: took 16.439891ms to wait for k8s-apps to be running ...
	I1202 19:47:23.919920   85424 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:47:23.920039   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:47:23.943430   85424 system_svc.go:56] duration metric: took 23.498391ms WaitForService to wait for kubelet
	I1202 19:47:23.943548   85424 kubeadm.go:587] duration metric: took 15.993331779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:47:23.943620   85424 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:47:23.963377   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963414   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963434   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963440   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963444   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963448   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963453   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963456   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963461   85424 node_conditions.go:105] duration metric: took 19.808046ms to run NodePressure ...
	I1202 19:47:23.963474   85424 start.go:242] waiting for startup goroutines ...
	I1202 19:47:23.963497   85424 start.go:256] writing updated cluster config ...
	I1202 19:47:23.966956   85424 out.go:203] 
	I1202 19:47:23.970081   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:23.970200   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:23.973545   85424 out.go:179] * Starting "ha-791576-m03" control-plane node in "ha-791576" cluster
	I1202 19:47:23.977222   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:47:23.980067   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:47:23.982893   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:47:23.982917   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:47:23.982945   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:47:23.983271   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:47:23.983306   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:47:23.983500   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:24.032012   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:47:24.032039   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:47:24.032056   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:47:24.032084   85424 start.go:360] acquireMachinesLock for ha-791576-m03: {Name:mke11e8197b1eb1f85f8abb689432afa86afcde6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:47:24.032155   85424 start.go:364] duration metric: took 54.948µs to acquireMachinesLock for "ha-791576-m03"
	I1202 19:47:24.032184   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:47:24.032191   85424 fix.go:54] fixHost starting: m03
	I1202 19:47:24.032519   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:47:24.061731   85424 fix.go:112] recreateIfNeeded on ha-791576-m03: state=Stopped err=<nil>
	W1202 19:47:24.061757   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:47:24.064925   85424 out.go:252] * Restarting existing docker container for "ha-791576-m03" ...
	I1202 19:47:24.065009   85424 cli_runner.go:164] Run: docker start ha-791576-m03
	I1202 19:47:24.481554   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:47:24.511641   85424 kic.go:430] container "ha-791576-m03" state is running.
	I1202 19:47:24.512003   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:24.552004   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:24.552243   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:47:24.552303   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:24.583210   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:24.583581   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:24.583591   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:47:24.584229   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47380->127.0.0.1:32823: read: connection reset by peer
	I1202 19:47:27.831905   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m03
	
	I1202 19:47:27.832023   85424 ubuntu.go:182] provisioning hostname "ha-791576-m03"
	I1202 19:47:27.832106   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:27.866228   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:27.866528   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:27.866538   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m03 && echo "ha-791576-m03" | sudo tee /etc/hostname
	I1202 19:47:28.206271   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m03
	
	I1202 19:47:28.206429   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:28.235744   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:28.236058   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:28.236081   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:47:28.537696   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:47:28.537727   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:47:28.537745   85424 ubuntu.go:190] setting up certificates
	I1202 19:47:28.537786   85424 provision.go:84] configureAuth start
	I1202 19:47:28.537865   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:28.575346   85424 provision.go:143] copyHostCerts
	I1202 19:47:28.575393   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:28.575433   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:47:28.575445   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:28.575528   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:47:28.575619   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:28.575644   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:47:28.575649   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:28.575682   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:47:28.575735   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:28.575759   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:47:28.575763   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:28.575791   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:47:28.575848   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m03 san=[127.0.0.1 192.168.49.4 ha-791576-m03 localhost minikube]
	I1202 19:47:28.737231   85424 provision.go:177] copyRemoteCerts
	I1202 19:47:28.737301   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:47:28.737343   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:28.767082   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:28.894686   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:47:28.894758   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:47:28.937222   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:47:28.937295   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:47:29.025224   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:47:29.025298   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:47:29.085079   85424 provision.go:87] duration metric: took 547.273818ms to configureAuth
	I1202 19:47:29.085116   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:47:29.085371   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:29.085483   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:29.111990   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:29.112296   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:29.112318   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:47:29.803395   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:47:29.803431   85424 machine.go:97] duration metric: took 5.251179236s to provisionDockerMachine
	I1202 19:47:29.803442   85424 start.go:293] postStartSetup for "ha-791576-m03" (driver="docker")
	I1202 19:47:29.803453   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:47:29.803521   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:47:29.803574   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:29.833575   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:29.954416   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:47:29.960020   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:47:29.960062   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:47:29.960082   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:47:29.960151   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:47:29.960229   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:47:29.960240   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:47:29.960341   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:47:29.982991   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:30.035283   85424 start.go:296] duration metric: took 231.823498ms for postStartSetup
	I1202 19:47:30.035374   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:47:30.035419   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.070768   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.190107   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:47:30.196635   85424 fix.go:56] duration metric: took 6.164437606s for fixHost
	I1202 19:47:30.196666   85424 start.go:83] releasing machines lock for "ha-791576-m03", held for 6.164502097s
	I1202 19:47:30.196744   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:30.237763   85424 out.go:179] * Found network options:
	I1202 19:47:30.240640   85424 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 19:47:30.243436   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243469   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243493   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243503   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:47:30.243571   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:47:30.243615   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.243653   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:47:30.243712   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.273326   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.286780   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.653045   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:47:30.787771   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:47:30.787854   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:47:30.833087   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
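The find command above looks for bridge/podman CNI config files in /etc/cni/net.d and renames any match to *.mk_disabled; here none were found, so nothing was disabled. A hypothetical Go sketch of the same idea (disableBridgeConfigs is an invented name, not minikube's cni.go code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI config files in dir to
// <name>.mk_disabled so they cannot conflict with the CNI minikube deploys.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", moved)
}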
	I1202 19:47:30.833158   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:47:30.833206   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:47:30.833279   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:47:30.864249   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:47:30.889806   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:47:30.889863   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:47:30.917840   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:47:30.984243   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:47:31.253878   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:47:31.593901   85424 docker.go:234] disabling docker service ...
	I1202 19:47:31.594010   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:47:31.621301   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:47:31.661349   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:47:32.003626   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:47:32.391869   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:47:32.435757   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:47:32.493110   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:47:32.493217   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.524849   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:47:32.524962   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.565517   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.598569   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.641426   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:47:32.662712   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.677733   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.714192   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.736481   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:47:32.750823   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:47:32.766296   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:33.098331   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:49:03.522289   85424 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.42388116s)
	I1202 19:49:03.522317   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:49:03.522385   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:49:03.526524   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:49:03.526585   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:49:03.530326   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:49:03.571925   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:49:03.572010   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:49:03.609479   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:49:03.650610   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:49:03.653540   85424 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:49:03.656557   85424 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 19:49:03.659527   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:49:03.677810   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:49:03.681792   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:49:03.692859   85424 mustload.go:66] Loading cluster: ha-791576
	I1202 19:49:03.693117   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:49:03.693363   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:49:03.709753   85424 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:49:03.710031   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.4
	I1202 19:49:03.710040   85424 certs.go:195] generating shared ca certs ...
	I1202 19:49:03.710054   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:49:03.710179   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:49:03.710223   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:49:03.710229   85424 certs.go:257] generating profile certs ...
	I1202 19:49:03.710306   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:49:03.710371   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.7aeb3685
	I1202 19:49:03.710427   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:49:03.710436   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:49:03.710521   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:49:03.710542   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:49:03.710554   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:49:03.710565   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:49:03.710577   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:49:03.710598   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:49:03.710610   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:49:03.710662   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:49:03.710695   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:49:03.710703   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:49:03.710730   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:49:03.710755   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:49:03.710778   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:49:03.710822   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:49:03.711042   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:49:03.711071   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:49:03.711083   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:03.711181   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:49:03.728781   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:49:03.830007   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:49:03.833942   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:49:03.842299   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:49:03.846144   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:49:03.854532   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:49:03.857855   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:49:03.866234   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:49:03.870642   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:49:03.879137   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:49:03.883549   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:49:03.893143   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:49:03.896763   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:49:03.904772   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:49:03.925546   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:49:03.951452   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:49:03.975797   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:49:03.998666   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:49:04.023000   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:49:04.042956   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:49:04.061815   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:49:04.081799   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:49:04.113304   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:49:04.131292   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:49:04.149359   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:49:04.163556   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:49:04.177001   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:49:04.191331   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:49:04.204195   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:49:04.216872   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:49:04.229341   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
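The block above follows a stat-then-copy pattern: each certificate is first sized on the node over SSH (stat -c %s) and only then transferred, either from the local .minikube tree or from an in-memory buffer. Below is a minimal sketch of that probe, shelling out to the system ssh client rather than using minikube's sshutil; the host, port and key path are placeholders modelled on this run.

    // certprobe.go: a minimal sketch (not minikube's sshutil/ssh_runner) of the
    // "stat, then copy" pattern above: size a remote file over SSH before deciding
    // whether it needs to be transferred. Host, port and key path are placeholders.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // remoteSize runs: ssh -i <key> -p <port> docker@<host> stat -c %s <path>
    func remoteSize(host, port, key, path string) (string, error) {
        out, err := exec.Command("ssh", "-i", key, "-p", port,
            "docker@"+host, "stat", "-c", "%s", path).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        size, err := remoteSize("127.0.0.1", "32813", "/path/to/id_rsa",
            "/var/lib/minikube/certs/sa.pub")
        if err != nil {
            fmt.Println("stat failed:", err)
            return
        }
        fmt.Println("remote size:", size, "bytes")
    }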
	I1202 19:49:04.242596   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:49:04.248724   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:49:04.256868   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.260467   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.260531   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.301235   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:49:04.308894   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:49:04.317175   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.320635   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.320703   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.362642   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:49:04.371073   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:49:04.379233   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.383803   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.383867   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.425589   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
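The test/ln/openssl sequences above install each CA into the OpenSSL trust directory: openssl x509 -hash -noout prints the certificate's subject hash (b5213941, 51391683 and 3ec20f2e here), and a <hash>.0 symlink makes the CA discoverable by hash lookup. A rough sketch of the same idea, shelling out to openssl; paths are illustrative and writing under /etc/ssl/certs needs root.

    // cahash.go: a sketch of the "openssl x509 -hash" + "ln -fs" pair above.
    // OpenSSL looks CA certificates up via subject-hash named links (<hash>.0),
    // so each installed PEM gets a matching symlink. Paths are illustrative.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mimic ln -fs: replace any stale link
        if err := os.Symlink(pem, link); err != nil {
            fmt.Println("symlink failed:", err)
            return
        }
        fmt.Println("linked", link, "->", pem)
    }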
	I1202 19:49:04.433230   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:49:04.436905   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:49:04.478804   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:49:04.521202   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:49:04.562989   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:49:04.603885   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:49:04.644970   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
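Each -checkend 86400 call above asks openssl whether the certificate will still be valid 24 hours from now; a failing check would force regeneration. The same check is straightforward in Go with crypto/x509; the path below is one of the files checked in this run.

    // checkend.go: an illustrative Go equivalent of the "openssl x509 -checkend 86400"
    // calls above: report whether a PEM-encoded certificate expires within 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse failed:", err)
            return
        }
        // -checkend 86400 passes only if the cert is still valid 86400s from now.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
        } else {
            fmt.Println("certificate valid past 24h, expires", cert.NotAfter)
        }
    }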
	I1202 19:49:04.686001   85424 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1202 19:49:04.686142   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:49:04.686175   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:49:04.686225   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:49:04.698332   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
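Because lsmod finds no ip_vs modules, kube-vip gives up on IPVS-based control-plane load balancing and only the VIP/ARP mode remains. lsmod itself just formats /proc/modules, so the probe can be reproduced directly; a small sketch:

    // ipvscheck.go: a sketch of the "lsmod | grep ip_vs" probe above. lsmod reads
    // /proc/modules, so scanning that file shows whether the ip_vs kernel modules
    // needed for IPVS load balancing are loaded.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/modules")
        if err != nil {
            fmt.Println("cannot read module list:", err)
            return
        }
        defer f.Close()

        found := false
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) == 0 {
                continue
            }
            // Each line starts with the module name, e.g. "ip_vs_rr 16384 0 - Live ...".
            if strings.HasPrefix(fields[0], "ip_vs") {
                found = true
                fmt.Println("loaded:", fields[0])
            }
        }
        if !found {
            fmt.Println("no ip_vs modules loaded; IPVS-based load balancing unavailable")
        }
    }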
	I1202 19:49:04.698392   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
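The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, where kubelet runs it as a static pod; the VIP it advertises is the "address" env value (192.168.49.254) over eth0. As a sanity check, one could parse the generated YAML and pull that value back out; a sketch using gopkg.in/yaml.v3 (an assumption, any YAML library would do), with the manifest path as used in this run.

    // vipcheck.go: a sketch that reads the generated kube-vip static-pod manifest
    // and prints the VIP it advertises (the "address" env var). Uses gopkg.in/yaml.v3.
    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    type podManifest struct {
        Spec struct {
            Containers []struct {
                Name string `yaml:"name"`
                Env  []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        var pod podManifest
        if err := yaml.Unmarshal(data, &pod); err != nil {
            fmt.Println("unmarshal failed:", err)
            return
        }
        for _, c := range pod.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    fmt.Printf("container %q advertises VIP %s\n", c.Name, e.Value)
                }
            }
        }
    }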
	I1202 19:49:04.698462   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:49:04.706596   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:49:04.706697   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:49:04.714019   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:49:04.726439   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:49:04.740943   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:49:04.755477   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:49:04.759442   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
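The one-liner above rewrites /etc/hosts in place: every line ending in a tab plus control-plane.minikube.internal is dropped, then the current HA VIP mapping is appended. The same edit expressed in Go (writing /etc/hosts needs root; the VIP is the one configured for this profile):

    // hostsvip.go: a sketch of the /etc/hosts rewrite above: remove any stale
    // control-plane.minikube.internal entry and append the current VIP mapping.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const name = "control-plane.minikube.internal"
        const vip = "192.168.49.254"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Same filter as: grep -v $'\tcontrol-plane.minikube.internal$'
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, vip+"\t"+name)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            fmt.Println("write failed:", err)
        }
    }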
	I1202 19:49:04.769254   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:49:04.889322   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:49:04.903380   85424 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:49:04.903723   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:49:04.907146   85424 out.go:179] * Verifying Kubernetes components...
	I1202 19:49:04.910053   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:49:05.053002   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:49:05.069583   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:49:05.069742   85424 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:49:05.070007   85424 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m03" to be "Ready" ...
	W1202 19:49:07.074081   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:09.574441   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:12.073995   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:14.075158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:16.574109   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:19.074269   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:21.573633   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:24.075532   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:26.573178   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:28.573751   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:30.574196   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:33.074433   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:35.574293   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:38.074355   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:40.572995   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:42.573766   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:44.574193   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:47.074875   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:49.574182   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:52.073848   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:54.074871   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:56.574461   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:59.074135   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:01.075025   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:03.573959   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:05.574229   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:08.073434   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:10.075308   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:12.573891   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:14.574258   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:17.075768   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:19.574491   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:22.073796   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:24.074628   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:26.574014   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:29.073484   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:31.074366   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:33.077573   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:35.574409   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:38.074415   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:40.076462   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:42.573398   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:44.574236   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:47.074487   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:49.574052   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:51.574295   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:53.574395   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:56.074579   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:58.573990   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:00.574496   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:03.074093   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:05.573622   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:07.574521   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:10.074177   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:12.074658   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:14.574234   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:17.073779   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:19.074824   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:21.075177   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:23.574226   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:25.574533   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:28.074719   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:30.573516   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:32.574725   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:35.073690   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:37.073844   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:39.074254   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:41.074445   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:43.574427   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:46.074495   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:48.075157   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:50.574559   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:53.074039   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:55.075518   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:57.574296   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:00.125095   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:02.573158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:04.574068   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:07.074149   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:09.573261   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:11.574325   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:14.074158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:16.074259   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:18.578414   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:21.074856   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:23.573367   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:25.574018   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:28.073545   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:30.074750   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:32.074791   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:34.573792   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:37.073884   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:39.074273   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:41.074719   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:43.573239   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:45.574142   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:48.073730   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:50.074154   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:52.074293   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:54.074677   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:56.574118   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:58.575322   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:01.074442   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:03.574221   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:06.074487   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:08.573768   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:10.574179   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:13.073867   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:15.074575   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:17.581482   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:20.075478   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:22.574434   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:25.079089   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:27.574074   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:30.074259   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:32.573125   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:34.573275   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:36.573791   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:39.075423   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:41.573386   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:43.573426   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:46.074050   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:48.074229   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:50.574069   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:53.073917   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:55.573030   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:57.574590   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:00.099899   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:02.573639   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:04.573928   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:06.574012   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:08.574318   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:11.073394   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:13.074011   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:15.074319   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:17.573595   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:19.574170   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:22.074150   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:24.074500   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:26.573647   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:28.573867   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:30.574160   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:33.074365   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:35.074585   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:37.574466   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:40.075645   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:42.573981   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:44.574615   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:46.576226   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:49.074146   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:51.074479   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:53.574396   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:56.073822   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:58.074332   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:00.115264   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:02.573371   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:05.070625   85424 node_ready.go:55] error getting node "ha-791576-m03" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 19:55:05.070669   85424 node_ready.go:38] duration metric: took 6m0.000641476s for node "ha-791576-m03" to be "Ready" ...
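The six-minute loop above is a plain poll of the node's Ready condition through the Kubernetes API; it stayed Unknown for the whole window, so the start aborts below. A minimal client-go sketch of that kind of wait (not minikube's node_ready.go); the kubeconfig path is a placeholder and the node name is the one from this test.

    // nodeready.go: a minimal client-go sketch of waiting for a node's Ready
    // condition, polling until it is True or the deadline expires.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        for {
            node, err := client.CoreV1().Nodes().Get(ctx, "ha-791576-m03", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            select {
            case <-ctx.Done():
                fmt.Println("gave up waiting:", ctx.Err())
                return
            case <-time.After(2 * time.Second):
            }
        }
    }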
	I1202 19:55:05.073996   85424 out.go:203] 
	W1202 19:55:05.077043   85424 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:55:05.077067   85424 out.go:285] * 
	* 
	W1202 19:55:05.079288   85424 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:55:05.082165   85424 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-791576 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-791576
helpers_test.go:243: (dbg) docker inspect ha-791576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	        "Created": "2025-12-02T19:40:54.919017186Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 85549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:46:51.358682133Z",
	            "FinishedAt": "2025-12-02T19:46:50.744519975Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hostname",
	        "HostsPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hosts",
	        "LogPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94-json.log",
	        "Name": "/ha-791576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-791576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-791576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	                "LowerDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-791576",
	                "Source": "/var/lib/docker/volumes/ha-791576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-791576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-791576",
	                "name.minikube.sigs.k8s.io": "ha-791576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d42040ea74c4eeedb7f84e603f4c2848e2cd3d94b7edd53b3686d82839a44349",
	            "SandboxKey": "/var/run/docker/netns/d42040ea74c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-791576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:f0:35:b9:8a:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56dad1208e3b87b69e94173604d284ae0e7c0f0097a9b4d2483c8eb74a9ccc65",
	                    "EndpointID": "0de808d6cef38a4c373fb171d1e5a929c71554ad4cf487786793c13d6a707020",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-791576",
	                        "f426f8269bd9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
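The inspect dump above also explains the port the earlier cli_runner call extracted: NetworkSettings.Ports maps 22/tcp to host port 32813, which is what the SSH client connected to. A short sketch of that lookup using the same docker inspect Go template (without the extra shell quoting); the container name is this profile's.

    // hostport.go: a sketch of the port lookup used earlier in this log
    // (docker container inspect -f ...): ask dockerd which host port is bound
    // to the container's 22/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-791576").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh is published on host port", strings.TrimSpace(string(out))) // e.g. 32813
    }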
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-791576 -n ha-791576
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 logs -n 25: (1.488348861s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m03_ha-791576-m02.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576-m04:/home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp testdata/cp-test.txt ha-791576-m04:/home/docker/cp-test.txt                                                             │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m04.txt │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m04_ha-791576.txt                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576.txt                                                 │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node start m02 --alsologtostderr -v 5                                                                                      │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:46 UTC │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │ 02 Dec 25 19:46 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5                                                                                   │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:46:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:46:51.075692   85424 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:46:51.075825   85424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:46:51.075836   85424 out.go:374] Setting ErrFile to fd 2...
	I1202 19:46:51.075841   85424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:46:51.076149   85424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:46:51.076551   85424 out.go:368] Setting JSON to false
	I1202 19:46:51.077367   85424 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5349,"bootTime":1764699462,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:46:51.077442   85424 start.go:143] virtualization:  
	I1202 19:46:51.082662   85424 out.go:179] * [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:46:51.085642   85424 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:46:51.085706   85424 notify.go:221] Checking for updates...
	I1202 19:46:51.091665   85424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:46:51.094539   85424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:51.097403   85424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:46:51.100336   85424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:46:51.103289   85424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:46:51.106849   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:51.106965   85424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:46:51.138890   85424 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:46:51.139003   85424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:46:51.198061   85424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:46:51.188947665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:46:51.198169   85424 docker.go:319] overlay module found
	I1202 19:46:51.201303   85424 out.go:179] * Using the docker driver based on existing profile
	I1202 19:46:51.204063   85424 start.go:309] selected driver: docker
	I1202 19:46:51.204087   85424 start.go:927] validating driver "docker" against &{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:51.204223   85424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:46:51.204328   85424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:46:51.266558   85424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:46:51.256321599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:46:51.266979   85424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:46:51.267013   85424 cni.go:84] Creating CNI manager for ""
	I1202 19:46:51.267084   85424 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 19:46:51.267148   85424 start.go:353] cluster config:
	{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:51.272255   85424 out.go:179] * Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	I1202 19:46:51.275067   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:46:51.277961   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:46:51.280789   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:51.280839   85424 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 19:46:51.280871   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:46:51.280873   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:46:51.280964   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:46:51.280974   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:46:51.281126   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:51.300000   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:46:51.300023   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:46:51.300050   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:46:51.300081   85424 start.go:360] acquireMachinesLock for ha-791576: {Name:mkf0dd990410be65dae2a66c72db1ebd52e2b0ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:46:51.300153   85424 start.go:364] duration metric: took 46.004µs to acquireMachinesLock for "ha-791576"
	I1202 19:46:51.300175   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:46:51.300183   85424 fix.go:54] fixHost starting: 
	I1202 19:46:51.300454   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:46:51.316816   85424 fix.go:112] recreateIfNeeded on ha-791576: state=Stopped err=<nil>
	W1202 19:46:51.316845   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:46:51.320143   85424 out.go:252] * Restarting existing docker container for "ha-791576" ...
	I1202 19:46:51.320230   85424 cli_runner.go:164] Run: docker start ha-791576
	I1202 19:46:51.575902   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:46:51.594134   85424 kic.go:430] container "ha-791576" state is running.
	I1202 19:46:51.594514   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:51.619517   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:51.619754   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:46:51.619817   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:51.639059   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:51.639374   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:51.639778   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:46:51.641510   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35428->127.0.0.1:32813: read: connection reset by peer
	I1202 19:46:54.791183   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:46:54.791204   85424 ubuntu.go:182] provisioning hostname "ha-791576"
	I1202 19:46:54.791275   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:54.809134   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:54.809441   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:54.809458   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576 && echo "ha-791576" | sudo tee /etc/hostname
	I1202 19:46:54.966477   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:46:54.966565   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:54.984050   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:54.984375   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:54.984402   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:46:55.137902   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:46:55.137928   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:46:55.138006   85424 ubuntu.go:190] setting up certificates
	I1202 19:46:55.138016   85424 provision.go:84] configureAuth start
	I1202 19:46:55.138084   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:55.155651   85424 provision.go:143] copyHostCerts
	I1202 19:46:55.155701   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:46:55.155740   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:46:55.155758   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:46:55.155836   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:46:55.155925   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:46:55.155955   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:46:55.155965   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:46:55.155993   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:46:55.156051   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:46:55.156071   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:46:55.156082   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:46:55.156108   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:46:55.156162   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576 san=[127.0.0.1 192.168.49.2 ha-791576 localhost minikube]
	I1202 19:46:55.641637   85424 provision.go:177] copyRemoteCerts
	I1202 19:46:55.641717   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:46:55.641763   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:55.660498   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:55.765103   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:46:55.765169   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:46:55.782097   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:46:55.782154   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:46:55.798837   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:46:55.798898   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 19:46:55.816023   85424 provision.go:87] duration metric: took 677.979406ms to configureAuth
	I1202 19:46:55.816052   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:46:55.816326   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:55.816455   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:55.833499   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:55.833854   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:55.833876   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:46:56.249298   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:46:56.249319   85424 machine.go:97] duration metric: took 4.629549894s to provisionDockerMachine
	I1202 19:46:56.249331   85424 start.go:293] postStartSetup for "ha-791576" (driver="docker")
	I1202 19:46:56.249341   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:46:56.249400   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:46:56.249454   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.268549   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.373420   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:46:56.376533   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:46:56.376562   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:46:56.376586   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:46:56.376642   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:46:56.376760   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:46:56.376771   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:46:56.376874   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:46:56.383745   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:46:56.400262   85424 start.go:296] duration metric: took 150.916843ms for postStartSetup
	I1202 19:46:56.400381   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:46:56.400460   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.420055   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.522566   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:46:56.527172   85424 fix.go:56] duration metric: took 5.226983089s for fixHost
	I1202 19:46:56.527198   85424 start.go:83] releasing machines lock for "ha-791576", held for 5.227032622s
	I1202 19:46:56.527261   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:56.543387   85424 ssh_runner.go:195] Run: cat /version.json
	I1202 19:46:56.543430   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:46:56.543494   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.543434   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.561404   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.561708   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.749544   85424 ssh_runner.go:195] Run: systemctl --version
	I1202 19:46:56.755696   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:46:56.790499   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:46:56.794459   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:46:56.794568   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:46:56.801919   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:46:56.801941   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:46:56.801971   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:46:56.802028   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:46:56.816910   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:46:56.829587   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:46:56.829715   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:46:56.844766   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:46:56.857092   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:46:56.975356   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:46:57.091555   85424 docker.go:234] disabling docker service ...
	I1202 19:46:57.091665   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:46:57.106660   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:46:57.120539   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:46:57.239669   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:46:57.366517   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:46:57.382471   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:46:57.396694   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:46:57.396813   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.405941   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:46:57.406053   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.415370   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.424417   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.433387   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:46:57.442311   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.451228   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.459398   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.468002   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:46:57.475168   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:46:57.482408   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:46:57.597548   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:46:57.804313   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:46:57.804451   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:46:57.808320   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:46:57.808445   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:46:57.812025   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:46:57.839390   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:46:57.839543   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:46:57.867354   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:46:57.901354   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:46:57.904220   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:46:57.920051   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:46:57.923689   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:46:57.933012   85424 kubeadm.go:884] updating cluster {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:46:57.933164   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:57.933217   85424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:46:57.967565   85424 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:46:57.967590   85424 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:46:57.967641   85424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:46:57.994848   85424 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:46:57.994872   85424 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:46:57.994881   85424 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:46:57.994976   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:46:57.995055   85424 ssh_runner.go:195] Run: crio config
	I1202 19:46:58.061390   85424 cni.go:84] Creating CNI manager for ""
	I1202 19:46:58.061418   85424 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 19:46:58.061446   85424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:46:58.061470   85424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-791576 NodeName:ha-791576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:46:58.061604   85424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-791576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:46:58.061624   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:46:58.061690   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:46:58.074421   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:46:58.074559   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:46:58.074648   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:46:58.083182   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:46:58.083291   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 19:46:58.091465   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 19:46:58.104313   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:46:58.118107   85424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1202 19:46:58.130768   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:46:58.143041   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:46:58.146530   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:46:58.155934   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:46:58.272546   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:46:58.287479   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.2
	I1202 19:46:58.287498   85424 certs.go:195] generating shared ca certs ...
	I1202 19:46:58.287513   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.287678   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:46:58.287718   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:46:58.287725   85424 certs.go:257] generating profile certs ...
	I1202 19:46:58.287810   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:46:58.287835   85424 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad
	I1202 19:46:58.287850   85424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1202 19:46:58.432480   85424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad ...
	I1202 19:46:58.432627   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad: {Name:mkc49591a089fa34cc904adb89cfa288cc2b970e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.432873   85424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad ...
	I1202 19:46:58.432910   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad: {Name:mk0be3cbf6db1780ac4ac275259d854f38f2158a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.433068   85424 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt
	I1202 19:46:58.433251   85424 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key
	I1202 19:46:58.433443   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:46:58.433477   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:46:58.433511   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:46:58.433556   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:46:58.433591   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:46:58.433624   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:46:58.433685   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:46:58.433721   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:46:58.433750   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:46:58.433833   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:46:58.433893   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:46:58.433920   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:46:58.433994   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:46:58.434052   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:46:58.434132   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:46:58.434225   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:46:58.434290   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.434337   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.434370   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.443939   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:46:58.463785   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:46:58.486458   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:46:58.508445   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:46:58.530317   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:46:58.548462   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:46:58.568358   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:46:58.586970   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:46:58.604714   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:46:58.627145   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:46:58.645042   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:46:58.663909   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:46:58.676006   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:46:58.681961   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:46:58.689749   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.693060   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.693152   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.735524   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:46:58.745065   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:46:58.754338   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.759068   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.759143   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.803928   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:46:58.811507   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:46:58.819506   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.823153   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.823249   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.865967   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:46:58.874198   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:46:58.878028   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:46:58.919236   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:46:58.961187   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:46:59.007842   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:46:59.061600   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:46:59.127987   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:46:59.207795   85424 kubeadm.go:401] StartCluster: {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:59.207925   85424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:46:59.207988   85424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:46:59.265803   85424 cri.go:89] found id: "71e9ce78d64661ac6d00283cdb79e431fdb65c5c2f57fa8aaa18d21677420d38"
	I1202 19:46:59.265827   85424 cri.go:89] found id: "a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9"
	I1202 19:46:59.265833   85424 cri.go:89] found id: "0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44"
	I1202 19:46:59.265836   85424 cri.go:89] found id: "392beb226748f9eb08b097b707e9c3fae2ea843b47c447e75c2c16d866e678de"
	I1202 19:46:59.265840   85424 cri.go:89] found id: "a038e721d900d1d05f302d84321aed3efa00807fa84f377dff1bb59ed20d56ce"
	I1202 19:46:59.265843   85424 cri.go:89] found id: ""
	I1202 19:46:59.265890   85424 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 19:46:59.290356   85424 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:46:59Z" level=error msg="open /run/runc: no such file or directory"
	I1202 19:46:59.290428   85424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:46:59.301612   85424 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:46:59.301633   85424 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:46:59.301705   85424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:46:59.310893   85424 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:46:59.311284   85424 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-791576" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:59.311384   85424 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "ha-791576" cluster setting kubeconfig missing "ha-791576" context setting]
	I1202 19:46:59.311696   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.312205   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:46:59.312709   85424 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:46:59.312741   85424 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:46:59.312748   85424 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:46:59.312753   85424 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:46:59.312758   85424 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:46:59.313075   85424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:46:59.313166   85424 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:46:59.323603   85424 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:46:59.323629   85424 kubeadm.go:602] duration metric: took 21.981794ms to restartPrimaryControlPlane
	I1202 19:46:59.323638   85424 kubeadm.go:403] duration metric: took 115.854562ms to StartCluster
	I1202 19:46:59.323653   85424 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.323714   85424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:59.324315   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.324515   85424 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:46:59.324543   85424 start.go:242] waiting for startup goroutines ...
	I1202 19:46:59.324556   85424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:46:59.325058   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:59.330563   85424 out.go:179] * Enabled addons: 
	I1202 19:46:59.333607   85424 addons.go:530] duration metric: took 9.049214ms for enable addons: enabled=[]
	I1202 19:46:59.333674   85424 start.go:247] waiting for cluster config update ...
	I1202 19:46:59.333687   85424 start.go:256] writing updated cluster config ...
	I1202 19:46:59.337224   85424 out.go:203] 
	I1202 19:46:59.340497   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:59.340616   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.343973   85424 out.go:179] * Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	I1202 19:46:59.346800   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:46:59.349828   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:46:59.352721   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:59.352753   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:46:59.352862   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:46:59.352879   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:46:59.353002   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.353206   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:46:59.379004   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:46:59.379030   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:46:59.379043   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:46:59.379066   85424 start.go:360] acquireMachinesLock for ha-791576-m02: {Name:mka8905b718126ef1af03281337b5ea61f190248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:46:59.379121   85424 start.go:364] duration metric: took 35.265µs to acquireMachinesLock for "ha-791576-m02"
	I1202 19:46:59.379145   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:46:59.379150   85424 fix.go:54] fixHost starting: m02
	I1202 19:46:59.379415   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:46:59.419284   85424 fix.go:112] recreateIfNeeded on ha-791576-m02: state=Stopped err=<nil>
	W1202 19:46:59.419317   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:46:59.422504   85424 out.go:252] * Restarting existing docker container for "ha-791576-m02" ...
	I1202 19:46:59.422616   85424 cli_runner.go:164] Run: docker start ha-791576-m02
	I1202 19:46:59.837868   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:46:59.874389   85424 kic.go:430] container "ha-791576-m02" state is running.
	I1202 19:46:59.874756   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:46:59.901234   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.901470   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:46:59.901529   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:46:59.939434   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:59.939741   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:46:59.939756   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:46:59.941956   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:47:03.181981   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:47:03.182010   85424 ubuntu.go:182] provisioning hostname "ha-791576-m02"
	I1202 19:47:03.182083   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.211290   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:03.211596   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:03.211614   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m02 && echo "ha-791576-m02" | sudo tee /etc/hostname
	I1202 19:47:03.424005   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:47:03.424078   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.477630   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:03.477958   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:03.477977   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:47:03.677990   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:47:03.678027   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:47:03.678048   85424 ubuntu.go:190] setting up certificates
	I1202 19:47:03.678060   85424 provision.go:84] configureAuth start
	I1202 19:47:03.678128   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:47:03.701231   85424 provision.go:143] copyHostCerts
	I1202 19:47:03.701274   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:03.701304   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:47:03.701318   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:03.701396   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:47:03.701478   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:03.701500   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:47:03.701510   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:03.701537   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:47:03.701637   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:03.701668   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:47:03.701674   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:03.701705   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:47:03.701761   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m02 san=[127.0.0.1 192.168.49.3 ha-791576-m02 localhost minikube]
	I1202 19:47:03.945165   85424 provision.go:177] copyRemoteCerts
	I1202 19:47:03.945235   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:47:03.945280   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.975366   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.102132   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:47:04.102208   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:47:04.134543   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:47:04.134604   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:47:04.161226   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:47:04.161297   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:47:04.192644   85424 provision.go:87] duration metric: took 514.571013ms to configureAuth
	I1202 19:47:04.192676   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:47:04.192912   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:04.193014   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.219315   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:04.219619   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:04.219638   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:47:04.675291   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:47:04.675356   85424 machine.go:97] duration metric: took 4.773873492s to provisionDockerMachine
	I1202 19:47:04.675373   85424 start.go:293] postStartSetup for "ha-791576-m02" (driver="docker")
	I1202 19:47:04.675386   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:47:04.675452   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:47:04.675498   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.694108   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.797554   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:47:04.800903   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:47:04.800934   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:47:04.800945   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:47:04.801002   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:47:04.801077   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:47:04.801089   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:47:04.801185   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:47:04.808567   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:04.826419   85424 start.go:296] duration metric: took 151.029848ms for postStartSetup
	I1202 19:47:04.826519   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:47:04.826573   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.843360   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.943115   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:47:04.948188   85424 fix.go:56] duration metric: took 5.569031295s for fixHost
	I1202 19:47:04.948214   85424 start.go:83] releasing machines lock for "ha-791576-m02", held for 5.56907917s
	I1202 19:47:04.948279   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:47:04.970572   85424 out.go:179] * Found network options:
	I1202 19:47:04.973538   85424 out.go:179]   - NO_PROXY=192.168.49.2
	W1202 19:47:04.976397   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:04.976445   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:47:04.976513   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:47:04.976562   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.976885   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:47:04.976937   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.998993   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:05.000433   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:05.146894   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:47:05.207886   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:47:05.207960   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:47:05.215827   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:47:05.215855   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:47:05.215923   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:47:05.215992   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:47:05.231545   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:47:05.245040   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:47:05.245102   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:47:05.260499   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:47:05.273511   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:47:05.399821   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:47:05.547719   85424 docker.go:234] disabling docker service ...
	I1202 19:47:05.547833   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:47:05.574826   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:47:05.600862   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:47:05.835995   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:47:06.044894   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:47:06.061431   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:47:06.092815   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:47:06.092932   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.102629   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:47:06.102737   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.112408   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.122046   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.131510   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:47:06.140127   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.149293   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.162481   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.173417   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:47:06.181633   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:47:06.189368   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:06.407349   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
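Taken together, the sed edits above pin the pause image to "registry.k8s.io/pause:3.10.1", switch cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and open net.ipv4.ip_unprivileged_port_start=0 through default_sysctls in /etc/crio/crio.conf.d/02-crio.conf, after which CRI-O is restarted. Since the m02 node is just a docker container here, the resulting drop-in can be spot-checked from the host with commands like the following (an illustrative sketch, not taken from this run's output):

    # Inspect the CRI-O drop-in that the sed calls above rewrote.
    docker exec ha-791576-m02 cat /etc/crio/crio.conf.d/02-crio.conf
    # Confirm the restarted runtime answers on the socket that /etc/crictl.yaml points at.
    docker exec ha-791576-m02 crictl info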
	I1202 19:47:06.656582   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:47:06.656693   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:47:06.660537   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:47:06.660607   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:47:06.664156   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:47:06.693772   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:47:06.693853   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:47:06.722024   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:47:06.754035   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:47:06.757007   85424 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:47:06.759990   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:47:06.777500   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:47:06.781343   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:47:06.791187   85424 mustload.go:66] Loading cluster: ha-791576
	I1202 19:47:06.791444   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:06.791707   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:47:06.808279   85424 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:47:06.808561   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.3
	I1202 19:47:06.808576   85424 certs.go:195] generating shared ca certs ...
	I1202 19:47:06.808596   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:47:06.808787   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:47:06.808843   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:47:06.808854   85424 certs.go:257] generating profile certs ...
	I1202 19:47:06.808932   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:47:06.808997   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.7b209479
	I1202 19:47:06.809041   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:47:06.809055   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:47:06.809070   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:47:06.809087   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:47:06.809100   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:47:06.809110   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:47:06.809124   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:47:06.809139   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:47:06.809152   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:47:06.809203   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:47:06.809238   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:47:06.809249   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:47:06.809275   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:47:06.809305   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:47:06.809331   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:47:06.809375   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:06.809409   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:06.809426   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:47:06.809437   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:47:06.809494   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:47:06.826818   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:47:06.926038   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:47:06.930094   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:47:06.938514   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:47:06.942246   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:47:06.951163   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:47:06.954843   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:47:06.962999   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:47:06.966675   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:47:06.975178   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:47:06.978885   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:47:06.987509   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:47:06.990939   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:47:06.999005   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:47:07.017141   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:47:07.034232   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:47:07.052223   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:47:07.068874   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:47:07.085118   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:47:07.102568   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:47:07.119624   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:47:07.137149   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:47:07.155661   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:47:07.174795   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:47:07.191770   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:47:07.204561   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:47:07.217443   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:47:07.230339   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:47:07.242695   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:47:07.255417   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:47:07.267762   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:47:07.280304   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:47:07.286551   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:47:07.294800   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.298454   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.298514   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.338926   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:47:07.346584   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:47:07.354270   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.358006   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.358069   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.398667   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:47:07.406676   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:47:07.414843   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.419161   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.419247   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.460207   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:47:07.467798   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:47:07.471321   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:47:07.514285   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:47:07.561278   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:47:07.603224   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:47:07.644697   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:47:07.686079   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
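Each openssl call above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides whether control-plane certificates still need regenerating. The actual expiry of any of these files can be read the same way, for example on the node container (an illustrative check, not part of this run):

    # Print the notAfter date of one of the certs verified above.
    docker exec ha-791576-m02 openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt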
	I1202 19:47:07.727346   85424 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1202 19:47:07.727470   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
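The drop-in above overrides the kubelet ExecStart with --hostname-override=ha-791576-m02 and --node-ip=192.168.49.3 so the secondary control-plane node registers under its own name and address while reusing the shared cluster config; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. If the unit ever needs checking by hand, something like the following would show the rendered unit and its state on the node container (a sketch, assuming the container is still running):

    # Show the kubelet unit together with the generated 10-kubeadm.conf drop-in.
    docker exec ha-791576-m02 systemctl cat kubelet
    docker exec ha-791576-m02 systemctl status kubelet --no-pager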
	I1202 19:47:07.727522   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:47:07.727601   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:47:07.740480   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:47:07.740546   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
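This manifest runs kube-vip as a static pod on each control-plane node: it announces the HA virtual IP 192.168.49.254 over ARP on eth0 and elects a leader through the plndr-cp-lock lease in kube-system (IPVS-based load balancing was skipped just above because the ip_vs kernel modules were not found). Once the cluster is reachable, the static pods and the current lease holder can be checked with ordinary kubectl, for example (illustrative commands, assuming kubectl uses the ha-791576 context from the repaired kubeconfig):

    # List the kube-vip static pods across the control-plane nodes.
    kubectl --context ha-791576 -n kube-system get pods -o wide | grep kube-vip
    # See which node currently holds the leader-election lease named in the manifest.
    kubectl --context ha-791576 -n kube-system get lease plndr-cp-lock -o yaml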
	I1202 19:47:07.740622   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:47:07.748776   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:47:07.748850   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:47:07.756859   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:47:07.770007   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:47:07.782397   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:47:07.795978   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:47:07.799804   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:47:07.808809   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:07.936978   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:47:07.950174   85424 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:47:07.950576   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:07.954257   85424 out.go:179] * Verifying Kubernetes components...
	I1202 19:47:07.957286   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:08.088938   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:47:08.104389   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:47:08.104523   85424 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:47:08.104787   85424 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m02" to be "Ready" ...
	W1202 19:47:18.106667   85424 node_ready.go:55] error getting node "ha-791576-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-791576-m02": net/http: TLS handshake timeout
	I1202 19:47:20.815620   85424 node_ready.go:49] node "ha-791576-m02" is "Ready"
	I1202 19:47:20.815646   85424 node_ready.go:38] duration metric: took 12.710819831s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:47:20.815659   85424 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:47:20.815715   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:21.316644   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:21.816110   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:22.315948   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:22.815840   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.316118   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.815903   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.838753   85424 api_server.go:72] duration metric: took 15.888533132s to wait for apiserver process to appear ...
	I1202 19:47:23.838776   85424 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:47:23.838807   85424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:47:23.866609   85424 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:47:23.870765   85424 api_server.go:141] control plane version: v1.34.2
	I1202 19:47:23.870793   85424 api_server.go:131] duration metric: took 32.004959ms to wait for apiserver health ...
	I1202 19:47:23.870804   85424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:47:23.889009   85424 system_pods.go:59] 26 kube-system pods found
	I1202 19:47:23.889120   85424 system_pods.go:61] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.889176   85424 system_pods.go:61] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.889202   85424 system_pods.go:61] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:47:23.889222   85424 system_pods.go:61] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:47:23.889255   85424 system_pods.go:61] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:47:23.889279   85424 system_pods.go:61] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:47:23.889300   85424 system_pods.go:61] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:47:23.889339   85424 system_pods.go:61] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:47:23.889361   85424 system_pods.go:61] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:47:23.889396   85424 system_pods.go:61] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.889439   85424 system_pods.go:61] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.889463   85424 system_pods.go:61] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:47:23.889517   85424 system_pods.go:61] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.889553   85424 system_pods.go:61] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.889589   85424 system_pods.go:61] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:47:23.889612   85424 system_pods.go:61] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:47:23.889629   85424 system_pods.go:61] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:47:23.889649   85424 system_pods.go:61] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:47:23.889703   85424 system_pods.go:61] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:47:23.889730   85424 system_pods.go:61] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:47:23.889767   85424 system_pods.go:61] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:47:23.889789   85424 system_pods.go:61] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:47:23.889813   85424 system_pods.go:61] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:47:23.889853   85424 system_pods.go:61] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:47:23.889881   85424 system_pods.go:61] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:47:23.889945   85424 system_pods.go:61] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:47:23.889982   85424 system_pods.go:74] duration metric: took 19.17073ms to wait for pod list to return data ...
	I1202 19:47:23.890015   85424 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:47:23.903242   85424 default_sa.go:45] found service account: "default"
	I1202 19:47:23.903345   85424 default_sa.go:55] duration metric: took 13.295846ms for default service account to be created ...
	I1202 19:47:23.903390   85424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:47:23.918952   85424 system_pods.go:86] 26 kube-system pods found
	I1202 19:47:23.919047   85424 system_pods.go:89] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.919079   85424 system_pods.go:89] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.919121   85424 system_pods.go:89] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:47:23.919147   85424 system_pods.go:89] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:47:23.919165   85424 system_pods.go:89] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:47:23.919210   85424 system_pods.go:89] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:47:23.919234   85424 system_pods.go:89] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:47:23.919257   85424 system_pods.go:89] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:47:23.919293   85424 system_pods.go:89] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:47:23.919328   85424 system_pods.go:89] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.919349   85424 system_pods.go:89] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.919407   85424 system_pods.go:89] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:47:23.919452   85424 system_pods.go:89] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.919498   85424 system_pods.go:89] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.919527   85424 system_pods.go:89] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:47:23.919571   85424 system_pods.go:89] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:47:23.919594   85424 system_pods.go:89] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:47:23.919611   85424 system_pods.go:89] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:47:23.919658   85424 system_pods.go:89] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:47:23.919681   85424 system_pods.go:89] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:47:23.919700   85424 system_pods.go:89] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:47:23.919737   85424 system_pods.go:89] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:47:23.919770   85424 system_pods.go:89] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:47:23.919789   85424 system_pods.go:89] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:47:23.919824   85424 system_pods.go:89] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:47:23.919853   85424 system_pods.go:89] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:47:23.919880   85424 system_pods.go:126] duration metric: took 16.439891ms to wait for k8s-apps to be running ...
	I1202 19:47:23.919920   85424 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:47:23.920039   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:47:23.943430   85424 system_svc.go:56] duration metric: took 23.498391ms WaitForService to wait for kubelet
	I1202 19:47:23.943548   85424 kubeadm.go:587] duration metric: took 15.993331779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:47:23.943620   85424 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:47:23.963377   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963414   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963434   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963440   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963444   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963448   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963453   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963456   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963461   85424 node_conditions.go:105] duration metric: took 19.808046ms to run NodePressure ...
	I1202 19:47:23.963474   85424 start.go:242] waiting for startup goroutines ...
	I1202 19:47:23.963497   85424 start.go:256] writing updated cluster config ...
	I1202 19:47:23.966956   85424 out.go:203] 
	I1202 19:47:23.970081   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:23.970200   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:23.973545   85424 out.go:179] * Starting "ha-791576-m03" control-plane node in "ha-791576" cluster
	I1202 19:47:23.977222   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:47:23.980067   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:47:23.982893   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:47:23.982917   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:47:23.982945   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:47:23.983271   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:47:23.983306   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:47:23.983500   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:24.032012   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:47:24.032039   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:47:24.032056   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:47:24.032084   85424 start.go:360] acquireMachinesLock for ha-791576-m03: {Name:mke11e8197b1eb1f85f8abb689432afa86afcde6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:47:24.032155   85424 start.go:364] duration metric: took 54.948µs to acquireMachinesLock for "ha-791576-m03"
	I1202 19:47:24.032184   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:47:24.032191   85424 fix.go:54] fixHost starting: m03
	I1202 19:47:24.032519   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:47:24.061731   85424 fix.go:112] recreateIfNeeded on ha-791576-m03: state=Stopped err=<nil>
	W1202 19:47:24.061757   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:47:24.064925   85424 out.go:252] * Restarting existing docker container for "ha-791576-m03" ...
	I1202 19:47:24.065009   85424 cli_runner.go:164] Run: docker start ha-791576-m03
	I1202 19:47:24.481554   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:47:24.511641   85424 kic.go:430] container "ha-791576-m03" state is running.
	I1202 19:47:24.512003   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:24.552004   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:24.552243   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:47:24.552303   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:24.583210   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:24.583581   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:24.583591   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:47:24.584229   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47380->127.0.0.1:32823: read: connection reset by peer
	I1202 19:47:27.831905   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m03
	
	I1202 19:47:27.832023   85424 ubuntu.go:182] provisioning hostname "ha-791576-m03"
	I1202 19:47:27.832106   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:27.866228   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:27.866528   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:27.866538   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m03 && echo "ha-791576-m03" | sudo tee /etc/hostname
	I1202 19:47:28.206271   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m03
	
	I1202 19:47:28.206429   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:28.235744   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:28.236058   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:28.236081   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:47:28.537696   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:47:28.537727   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:47:28.537745   85424 ubuntu.go:190] setting up certificates
	I1202 19:47:28.537786   85424 provision.go:84] configureAuth start
	I1202 19:47:28.537865   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:28.575346   85424 provision.go:143] copyHostCerts
	I1202 19:47:28.575393   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:28.575433   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:47:28.575445   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:28.575528   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:47:28.575619   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:28.575644   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:47:28.575649   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:28.575682   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:47:28.575735   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:28.575759   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:47:28.575763   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:28.575791   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:47:28.575848   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m03 san=[127.0.0.1 192.168.49.4 ha-791576-m03 localhost minikube]
	I1202 19:47:28.737231   85424 provision.go:177] copyRemoteCerts
	I1202 19:47:28.737301   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:47:28.737343   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:28.767082   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:28.894686   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:47:28.894758   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:47:28.937222   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:47:28.937295   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:47:29.025224   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:47:29.025298   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:47:29.085079   85424 provision.go:87] duration metric: took 547.273818ms to configureAuth
	I1202 19:47:29.085116   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:47:29.085371   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:29.085483   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:29.111990   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:29.112296   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:29.112318   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:47:29.803395   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:47:29.803431   85424 machine.go:97] duration metric: took 5.251179236s to provisionDockerMachine
	I1202 19:47:29.803442   85424 start.go:293] postStartSetup for "ha-791576-m03" (driver="docker")
	I1202 19:47:29.803453   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:47:29.803521   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:47:29.803574   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:29.833575   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:29.954416   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:47:29.960020   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:47:29.960062   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:47:29.960082   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:47:29.960151   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:47:29.960229   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:47:29.960240   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:47:29.960341   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:47:29.982991   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:30.035283   85424 start.go:296] duration metric: took 231.823498ms for postStartSetup
	I1202 19:47:30.035374   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:47:30.035419   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.070768   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.190107   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:47:30.196635   85424 fix.go:56] duration metric: took 6.164437606s for fixHost
	I1202 19:47:30.196666   85424 start.go:83] releasing machines lock for "ha-791576-m03", held for 6.164502097s
	I1202 19:47:30.196744   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:30.237763   85424 out.go:179] * Found network options:
	I1202 19:47:30.240640   85424 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 19:47:30.243436   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243469   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243493   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243503   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:47:30.243571   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:47:30.243615   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.243653   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:47:30.243712   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.273326   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.286780   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.653045   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:47:30.787771   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:47:30.787854   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:47:30.833087   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:47:30.833158   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:47:30.833206   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:47:30.833279   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:47:30.864249   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:47:30.889806   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:47:30.889863   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:47:30.917840   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:47:30.984243   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:47:31.253878   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:47:31.593901   85424 docker.go:234] disabling docker service ...
	I1202 19:47:31.594010   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:47:31.621301   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:47:31.661349   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:47:32.003626   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:47:32.391869   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:47:32.435757   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:47:32.493110   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:47:32.493217   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.524849   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:47:32.524962   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.565517   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.598569   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.641426   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:47:32.662712   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.677733   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.714192   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.736481   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:47:32.750823   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:47:32.766296   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:33.098331   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:49:03.522289   85424 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.42388116s)
	I1202 19:49:03.522317   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:49:03.522385   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:49:03.526524   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:49:03.526585   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:49:03.530326   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:49:03.571925   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:49:03.572010   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:49:03.609479   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:49:03.650610   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:49:03.653540   85424 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:49:03.656557   85424 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 19:49:03.659527   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:49:03.677810   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:49:03.681792   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:49:03.692859   85424 mustload.go:66] Loading cluster: ha-791576
	I1202 19:49:03.693117   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:49:03.693363   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:49:03.709753   85424 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:49:03.710031   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.4
	I1202 19:49:03.710040   85424 certs.go:195] generating shared ca certs ...
	I1202 19:49:03.710054   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:49:03.710179   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:49:03.710223   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:49:03.710229   85424 certs.go:257] generating profile certs ...
	I1202 19:49:03.710306   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:49:03.710371   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.7aeb3685
	I1202 19:49:03.710427   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:49:03.710436   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:49:03.710521   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:49:03.710542   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:49:03.710554   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:49:03.710565   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:49:03.710577   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:49:03.710598   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:49:03.710610   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:49:03.710662   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:49:03.710695   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:49:03.710703   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:49:03.710730   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:49:03.710755   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:49:03.710778   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:49:03.710822   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:49:03.711042   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:49:03.711071   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:49:03.711083   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:03.711181   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:49:03.728781   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:49:03.830007   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:49:03.833942   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:49:03.842299   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:49:03.846144   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:49:03.854532   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:49:03.857855   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:49:03.866234   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:49:03.870642   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:49:03.879137   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:49:03.883549   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:49:03.893143   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:49:03.896763   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:49:03.904772   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:49:03.925546   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:49:03.951452   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:49:03.975797   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:49:03.998666   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:49:04.023000   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:49:04.042956   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:49:04.061815   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:49:04.081799   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:49:04.113304   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:49:04.131292   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:49:04.149359   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:49:04.163556   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:49:04.177001   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:49:04.191331   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:49:04.204195   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:49:04.216872   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:49:04.229341   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:49:04.242596   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:49:04.248724   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:49:04.256868   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.260467   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.260531   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.301235   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:49:04.308894   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:49:04.317175   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.320635   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.320703   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.362642   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:49:04.371073   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:49:04.379233   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.383803   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.383867   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.425589   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:49:04.433230   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:49:04.436905   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:49:04.478804   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:49:04.521202   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:49:04.562989   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:49:04.603885   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:49:04.644970   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
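The six openssl runs above all pass -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now. A minimal manual equivalent on the node, reusing one of the cert paths shown in the log, would be:

	# exits 0 if the cert does not expire within the next 24h (path taken from the log above)
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for >24h"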
	I1202 19:49:04.686001   85424 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1202 19:49:04.686142   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:49:04.686175   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:49:04.686225   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:49:04.698332   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
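lsmod only lists modules that are already loaded, so the empty output above does not by itself prove the kernel lacks IPVS support; a sketch of a manual probe (the module names are the usual IPVS set and are an assumption, not taken from this log) is:

	# try to load the common IPVS modules, then re-check; modprobe failures mean the kernel really has no IPVS
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs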
	I1202 19:49:04.698392   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:49:04.698462   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:49:04.706596   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:49:04.706697   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:49:04.714019   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:49:04.726439   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:49:04.740943   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:49:04.755477   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:49:04.759442   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:49:04.769254   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:49:04.889322   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:49:04.903380   85424 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:49:04.903723   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:49:04.907146   85424 out.go:179] * Verifying Kubernetes components...
	I1202 19:49:04.910053   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:49:05.053002   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:49:05.069583   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:49:05.069742   85424 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:49:05.070007   85424 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m03" to be "Ready" ...
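Each warning that follows is one poll of the node object's Ready condition; to inspect the same condition by hand while the wait is stuck, something along these lines works, assuming kubectl is pointed at this cluster's kubeconfig:

	# print the Ready condition the test keeps polling (standard kubectl jsonpath, not taken from the log)
	kubectl get node ha-791576-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	kubectl describe node ha-791576-m03 | grep -A8 Conditions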
	W1202 19:49:07.074081   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:09.574441   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:12.073995   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:14.075158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:16.574109   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:19.074269   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:21.573633   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:24.075532   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:26.573178   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:28.573751   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:30.574196   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:33.074433   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:35.574293   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:38.074355   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:40.572995   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:42.573766   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:44.574193   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:47.074875   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:49.574182   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:52.073848   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:54.074871   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:56.574461   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:59.074135   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:01.075025   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:03.573959   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:05.574229   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:08.073434   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:10.075308   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:12.573891   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:14.574258   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:17.075768   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:19.574491   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:22.073796   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:24.074628   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:26.574014   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:29.073484   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:31.074366   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:33.077573   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:35.574409   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:38.074415   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:40.076462   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:42.573398   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:44.574236   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:47.074487   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:49.574052   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:51.574295   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:53.574395   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:56.074579   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:58.573990   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:00.574496   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:03.074093   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:05.573622   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:07.574521   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:10.074177   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:12.074658   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:14.574234   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:17.073779   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:19.074824   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:21.075177   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:23.574226   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:25.574533   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:28.074719   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:30.573516   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:32.574725   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:35.073690   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:37.073844   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:39.074254   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:41.074445   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:43.574427   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:46.074495   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:48.075157   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:50.574559   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:53.074039   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:55.075518   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:57.574296   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:00.125095   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:02.573158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:04.574068   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:07.074149   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:09.573261   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:11.574325   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:14.074158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:16.074259   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:18.578414   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:21.074856   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:23.573367   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:25.574018   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:28.073545   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:30.074750   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:32.074791   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:34.573792   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:37.073884   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:39.074273   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:41.074719   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:43.573239   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:45.574142   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:48.073730   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:50.074154   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:52.074293   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:54.074677   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:56.574118   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:58.575322   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:01.074442   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:03.574221   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:06.074487   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:08.573768   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:10.574179   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:13.073867   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:15.074575   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:17.581482   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:20.075478   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:22.574434   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:25.079089   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:27.574074   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:30.074259   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:32.573125   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:34.573275   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:36.573791   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:39.075423   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:41.573386   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:43.573426   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:46.074050   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:48.074229   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:50.574069   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:53.073917   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:55.573030   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:57.574590   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:00.099899   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:02.573639   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:04.573928   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:06.574012   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:08.574318   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:11.073394   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:13.074011   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:15.074319   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:17.573595   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:19.574170   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:22.074150   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:24.074500   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:26.573647   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:28.573867   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:30.574160   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:33.074365   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:35.074585   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:37.574466   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:40.075645   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:42.573981   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:44.574615   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:46.576226   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:49.074146   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:51.074479   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:53.574396   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:56.073822   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:58.074332   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:00.115264   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:02.573371   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:05.070625   85424 node_ready.go:55] error getting node "ha-791576-m03" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 19:55:05.070669   85424 node_ready.go:38] duration metric: took 6m0.000641476s for node "ha-791576-m03" to be "Ready" ...
	I1202 19:55:05.073996   85424 out.go:203] 
	W1202 19:55:05.077043   85424 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:55:05.077067   85424 out.go:285] * 
	W1202 19:55:05.079288   85424 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:55:05.082165   85424 out.go:203] 
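
The exit above comes from the node_ready.go wait loop giving up after 6m0s while "ha-791576-m03" stayed in an Unknown Ready state. A minimal sketch (not minikube's own code) of the equivalent check with client-go is shown below; the kubeconfig path is an illustrative assumption, the node name is taken from the log.

	// poll_node_ready.go - minimal sketch, assuming a kubeconfig at the path below.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust for the environment under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same overall budget as the failing wait above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-791576-m03", metav1.GetOptions{})
			if err != nil {
				fmt.Println("get node:", err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						// Prints Unknown with reason NodeStatusUnknown for the node above.
						fmt.Printf("Ready=%s (reason: %s)\n", c.Status, c.Reason)
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("gave up:", ctx.Err()) // mirrors the context deadline exceeded exit
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
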
	
	
	==> CRI-O <==
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.307348407Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.31130981Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.311346765Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.171067478Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-l5g8z/POD" id=3186c4cc-fc42-4e21-9951-8f685af60ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.171146976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.176647774Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-l5g8z Namespace:default ID:e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 UID:9231dae8-fa3f-4719-aa0b-e2893cf7afe6 NetNS:/var/run/netns/7fb14d7a-6c4f-4e81-940a-0b966199ab09 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000785c8}] Aliases:map[]}"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.176731103Z" level=info msg="Adding pod default_busybox-7b57f96db7-l5g8z to CNI network \"kindnet\" (type=ptp)"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.190912432Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-l5g8z Namespace:default ID:e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 UID:9231dae8-fa3f-4719-aa0b-e2893cf7afe6 NetNS:/var/run/netns/7fb14d7a-6c4f-4e81-940a-0b966199ab09 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000785c8}] Aliases:map[]}"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.191265818Z" level=info msg="Checking pod default_busybox-7b57f96db7-l5g8z for CNI network kindnet (type=ptp)"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.194319023Z" level=info msg="Ran pod sandbox e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 with infra container: default/busybox-7b57f96db7-l5g8z/POD" id=3186c4cc-fc42-4e21-9951-8f685af60ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195591573Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195818661Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195924693Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28 found" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.198155298Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=6aac8e64-e8fe-4d24-8b7c-6bfee82ead34 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.21349448Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.105132802Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=6aac8e64-e8fe-4d24-8b7c-6bfee82ead34 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.109539064Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2fa56a44-9db8-4756-acdc-664a3a83dc98 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.115065486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2fa5b526-dbea-407e-8ac9-a7b0f9d1c48f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.123684127Z" level=info msg="Creating container: default/busybox-7b57f96db7-l5g8z/busybox" id=56f9dc5c-2dab-42d0-8b1a-c7a9d3167a95 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.12386251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.129280857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.12990521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.146011626Z" level=info msg="Created container e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2: default/busybox-7b57f96db7-l5g8z/busybox" id=56f9dc5c-2dab-42d0-8b1a-c7a9d3167a95 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.147143527Z" level=info msg="Starting container: e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2" id=e10cf319-014f-4dc6-80be-da1936659c45 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.151353561Z" level=info msg="Started container" PID=1519 containerID=e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2 description=default/busybox-7b57f96db7-l5g8z/busybox id=e10cf319-014f-4dc6-80be-da1936659c45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	e51c0c263b11d       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   About a minute ago   Running             busybox                   0                   e289b1b32fb87       busybox-7b57f96db7-l5g8z            default
	c74c4f823da84       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      7 minutes ago        Running             storage-provisioner       3                   611ff54ac571a       storage-provisioner                 kube-system
	0ca58a409109c       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      7 minutes ago        Running             kube-controller-manager   2                   065d40fa0cc23       kube-controller-manager-ha-791576   kube-system
	3335ad39bba28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      7 minutes ago        Running             coredns                   1                   785cb0dfb8b28       coredns-66bc5c9577-w2245            kube-system
	406623e1d0127       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      7 minutes ago        Running             coredns                   1                   fda4cb2ab460e       coredns-66bc5c9577-hw99j            kube-system
	e3e00e2da8bd7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      7 minutes ago        Running             kindnet-cni               1                   b3c174d7d003c       kindnet-m2l5j                       kube-system
	1ab649bc08ab0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      7 minutes ago        Exited              storage-provisioner       2                   611ff54ac571a       storage-provisioner                 kube-system
	4f18d2c8cbb18       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      7 minutes ago        Running             kube-proxy                1                   5e76fe966d8bb       kube-proxy-q5vfv                    kube-system
	71e9ce78d6466       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70                                      8 minutes ago        Running             kube-vip                  0                   75d9a258d0378       kube-vip-ha-791576                  kube-system
	a18297fd12571       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      8 minutes ago        Running             kube-apiserver            1                   d2e111aee1d35       kube-apiserver-ha-791576            kube-system
	0e19b5bb45d9e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      8 minutes ago        Exited              kube-controller-manager   1                   065d40fa0cc23       kube-controller-manager-ha-791576   kube-system
	392beb226748f       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      8 minutes ago        Running             etcd                      1                   d6f57a5f40b96       etcd-ha-791576                      kube-system
	a038e721d900d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      8 minutes ago        Running             kube-scheduler            1                   6a36f33b4c7e9       kube-scheduler-ha-791576            kube-system
	
	
	==> coredns [3335ad39bba28fdd293923b313dec13f1a33d55117eaf80083a781dff0d8bdea] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42808 - 26955 "HINFO IN 630864626443792637.4045400913318639804. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02501392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [406623e1d012777bc4fd0347ac8b3f005c55afa441ea4b81863c6c008ee30979] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47637 - 44081 "HINFO IN 8875301780668194042.4808208815551959978. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019656625s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
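
Both CoreDNS replicas report "dial tcp 10.96.0.1:443: i/o timeout", i.e. the in-cluster API service VIP was unreachable from the pod network for a while. A minimal connectivity sketch for that symptom follows; the address is taken from the log, and the probe is only meaningful when run from inside the cluster's pod network (for example from a debug pod), which is an assumption here.

	// probe_vip.go - minimal sketch of a raw TCP probe to the service VIP above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			// Matches the "i/o timeout" errors CoreDNS logs above.
			fmt.Println("unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("service VIP reachable from this network namespace")
	}
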
	
	
	==> describe nodes <==
	Name:               ha-791576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_41_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:41:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:55:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:47:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-791576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                2cbc5f56-f69a-4743-bfe0-c26cb688e6dd
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l5g8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 coredns-66bc5c9577-hw99j             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-w2245             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-791576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-m2l5j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-791576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-791576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-q5vfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-791576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-791576                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 7m44s                kube-proxy       
	  Normal   Starting                 13m                  kube-proxy       
	  Warning  CgroupV1                 13m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 13m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)    kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 13m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeReady                12m                  kubelet          Node ha-791576 status is now: NodeReady
	  Normal   RegisteredNode           11m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           8m43s                node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   Starting                 8m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m8s (x8 over 8m8s)  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m8s (x8 over 8m8s)  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m8s (x8 over 8m8s)  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m40s                node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           7m9s                 node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	
	
	Name:               ha-791576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:42:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:54:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:55:01 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:55:01 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:55:01 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:55:01 +0000   Tue, 02 Dec 2025 19:42:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-791576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dee40d7f-dceb-491c-be1b-bbfe6e5bbf5d
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-npkff                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-791576-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-ksng5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-791576-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-791576-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pjkt7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-791576-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-791576-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m25s                  kube-proxy       
	  Normal   Starting                 8m34s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Warning  CgroupV1                 9m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m17s (x8 over 9m18s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m17s (x8 over 9m18s)  kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m17s (x8 over 9m18s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m43s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   Starting                 8m4s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m4s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m4s (x8 over 8m4s)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m4s (x8 over 8m4s)    kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m4s (x8 over 8m4s)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m40s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           7m9s                   node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	
	
	Name:               ha-791576-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_43_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:43:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:46:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 02 Dec 2025 19:46:10 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 02 Dec 2025 19:46:10 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 02 Dec 2025 19:46:10 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 02 Dec 2025 19:46:10 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-791576-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                55822733-4606-4949-b6de-fc211d66e023
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xjn7v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     busybox-7b57f96db7-zjghb                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-791576-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-2pf27                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-791576-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-791576-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-dvt58                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-791576-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-791576-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  11m    node-controller  Node ha-791576-m03 event: Registered Node ha-791576-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-791576-m03 event: Registered Node ha-791576-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-791576-m03 event: Registered Node ha-791576-m03 in Controller
	  Normal  RegisteredNode  8m43s  node-controller  Node ha-791576-m03 event: Registered Node ha-791576-m03 in Controller
	  Normal  RegisteredNode  7m40s  node-controller  Node ha-791576-m03 event: Registered Node ha-791576-m03 in Controller
	  Normal  RegisteredNode  7m9s   node-controller  Node ha-791576-m03 event: Registered Node ha-791576-m03 in Controller
	  Normal  NodeNotReady    6m49s  node-controller  Node ha-791576-m03 status is now: NodeNotReady
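
The m03 description shows the node.kubernetes.io/unreachable NoSchedule/NoExecute taints and all conditions Unknown since the kubelet stopped posting status at 19:46. A small inspection sketch (not part of the test harness) that lists nodes and flags those taints is shown below; the kubeconfig path is an assumption.

	// list_unreachable.go - minimal sketch, assuming a kubeconfig at the path below.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			unreachable := false
			for _, t := range n.Spec.Taints {
				if t.Key == "node.kubernetes.io/unreachable" {
					unreachable = true
				}
			}
			// Expected to flag ha-791576-m03 and ha-791576-m04 in the state captured above.
			fmt.Printf("%-16s unreachable-taint=%v\n", n.Name, unreachable)
		}
	}
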
	
	
	Name:               ha-791576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_44_30_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:44:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:46:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-791576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                368f8765-e8de-4d0d-9ce4-3a1b12660712
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8zbzj       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-4tffm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeReady                9m55s              kubelet          Node ha-791576-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m43s              node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           7m40s              node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           7m9s               node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeNotReady             6m50s              node-controller  Node ha-791576-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:41] overlayfs: idmapped layers are currently not supported
	[ +32.622792] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:44] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:45] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:46] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:47] overlayfs: idmapped layers are currently not supported
	[ +24.868591] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [392beb226748f9eb08b097b707e9c3fae2ea843b47c447e75c2c16d866e678de] <==
	{"level":"warn","ts":"2025-12-02T19:48:49.408714Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"44eee1400a9a95d4","rtt":"97.176058ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:49.408730Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"44eee1400a9a95d4","rtt":"79.383358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:49.571750Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:49.571804Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:53.573107Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:53.573158Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:54.409056Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"44eee1400a9a95d4","rtt":"97.176058ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:54.409042Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"44eee1400a9a95d4","rtt":"79.383358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:57.574056Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:57.574122Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:59.409580Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"44eee1400a9a95d4","rtt":"97.176058ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:48:59.409569Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"44eee1400a9a95d4","rtt":"79.383358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:49:01.575708Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:49:01.575758Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:49:04.410022Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"44eee1400a9a95d4","rtt":"79.383358ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:49:04.410355Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"44eee1400a9a95d4","rtt":"97.176058ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:49:05.577552Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-02T19:49:05.577612Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"44eee1400a9a95d4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-12-02T19:49:06.780015Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"44eee1400a9a95d4","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-02T19:49:06.780161Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.780208Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.809456Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"44eee1400a9a95d4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-02T19:49:06.809499Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.877742Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.877902Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	
	
	==> kernel <==
	 19:55:06 up  1:37,  0 user,  load average: 1.05, 1.30, 1.23
	Linux ha-791576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3e00e2da8bd7227823a5aa7d6e5e4ac4d0b3b6254164b8f98c55f9fe1e0a41f] <==
	I1202 19:54:32.302087       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:54:42.312418       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:54:42.312537       1 main.go:301] handling current node
	I1202 19:54:42.312578       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:54:42.312614       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:54:42.312844       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1202 19:54:42.312918       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:54:42.313067       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:54:42.313115       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:54:52.295209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:54:52.295374       1 main.go:301] handling current node
	I1202 19:54:52.295398       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:54:52.295406       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:54:52.295573       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1202 19:54:52.295595       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:54:52.295666       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:54:52.295678       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:55:02.294739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:55:02.294851       1 main.go:301] handling current node
	I1202 19:55:02.294904       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:55:02.294921       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:55:02.295068       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1202 19:55:02.295112       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:55:02.295205       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:55:02.295219       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9] <==
	I1202 19:47:20.864636       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 19:47:20.866204       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 19:47:20.866305       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 19:47:20.885386       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 19:47:20.888287       1 aggregator.go:171] initial CRD sync complete...
	I1202 19:47:20.888313       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 19:47:20.888321       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 19:47:20.888328       1 cache.go:39] Caches are synced for autoregister controller
	I1202 19:47:20.889012       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 19:47:20.889287       1 cache.go:39] Caches are synced for LocalAvailability controller
	W1202 19:47:20.904276       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1202 19:47:20.906209       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 19:47:20.916565       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1202 19:47:20.920922       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1202 19:47:20.934648       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 19:47:20.959819       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 19:47:20.961820       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 19:47:20.967200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 19:47:20.968399       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 19:47:21.496374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 19:47:21.505479       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1202 19:47:22.134237       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1202 19:47:27.016568       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 19:47:27.032930       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 19:47:27.190342       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0ca58a409109c6cf93ecd9eb064e7f3091b3dd592f95be9877036c0d2bbfeb8d] <==
	I1202 19:47:26.655080       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576-m03"
	I1202 19:47:26.656058       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576-m04"
	I1202 19:47:26.656190       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576"
	I1202 19:47:26.656300       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576-m02"
	I1202 19:47:26.656402       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 19:47:26.660500       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 19:47:26.660768       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 19:47:26.663628       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 19:47:26.674684       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 19:47:26.651025       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 19:47:26.678311       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 19:47:26.697935       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 19:47:26.687406       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 19:47:26.700695       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:47:26.687552       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 19:47:26.687521       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 19:47:26.687960       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 19:47:26.687543       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 19:47:26.816935       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:47:26.835171       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:47:26.835200       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 19:47:26.835207       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 19:47:31.218533       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-791576-m04"
	I1202 19:53:17.454482       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:53:17.454338       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-xjn7v"
	
	
	==> kube-controller-manager [0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44] <==
	I1202 19:47:00.439112       1 serving.go:386] Generated self-signed cert in-memory
	I1202 19:47:01.716830       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 19:47:01.716879       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:01.723350       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 19:47:01.723493       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 19:47:01.723581       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 19:47:01.723592       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 19:47:21.737115       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [4f18d2c8cbb18519eff20cb6efdd106364f8f81f655e7d0e55cb89f551d5ed2f] <==
	I1202 19:47:22.149595       1 server_linux.go:53] "Using iptables proxy"
	I1202 19:47:22.267082       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:47:22.368012       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:47:22.368108       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 19:47:22.368213       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:47:22.406247       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 19:47:22.406301       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:47:22.411880       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:47:22.412231       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:47:22.412424       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:22.415564       1 config.go:200] "Starting service config controller"
	I1202 19:47:22.415619       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:47:22.415683       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:47:22.415727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:47:22.415771       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:47:22.415809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:47:22.419448       1 config.go:309] "Starting node config controller"
	I1202 19:47:22.419524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:47:22.419556       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 19:47:22.515835       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 19:47:22.515918       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:47:22.515839       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a038e721d900d1d05f302d84321aed3efa00807fa84f377dff1bb59ed20d56ce] <==
	I1202 19:47:20.728119       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 19:47:20.728161       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:20.745935       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:47:20.746041       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:47:20.747549       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 19:47:20.749738       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 19:47:20.809434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 19:47:20.809434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 19:47:20.809565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 19:47:20.809638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 19:47:20.809646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 19:47:20.809729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 19:47:20.809794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 19:47:20.809851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 19:47:20.809908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 19:47:20.809962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 19:47:20.810015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 19:47:20.810111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 19:47:20.810263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 19:47:20.810386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 19:47:20.813845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 19:47:20.813926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 19:47:20.813972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 19:47:20.814068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1202 19:47:20.946679       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 19:47:21 ha-791576 kubelet[793]: E1202 19:47:21.014395     793 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-791576\" already exists" pod="kube-system/kube-controller-manager-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.422624     793 apiserver.go:52] "Watching apiserver"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.426135     793 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-791576" podUID="1848798a-e3e5-49f2-a138-7a169024e0bd"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.440465     793 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.449304     793 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.449333     793 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494517     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-xtables-lock\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494710     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7a2e34ca-2f88-457c-8898-9cfbab53ca55-tmp\") pod \"storage-provisioner\" (UID: \"7a2e34ca-2f88-457c-8898-9cfbab53ca55\") " pod="kube-system/storage-provisioner"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494792     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-lib-modules\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494985     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/011527c2-0bbf-4dd9-a775-7bbd1a8647a4-xtables-lock\") pod \"kube-proxy-q5vfv\" (UID: \"011527c2-0bbf-4dd9-a775-7bbd1a8647a4\") " pod="kube-system/kube-proxy-q5vfv"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.495131     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/011527c2-0bbf-4dd9-a775-7bbd1a8647a4-lib-modules\") pod \"kube-proxy-q5vfv\" (UID: \"011527c2-0bbf-4dd9-a775-7bbd1a8647a4\") " pod="kube-system/kube-proxy-q5vfv"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.495164     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-cni-cfg\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.517177     793 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.605158     793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-791576" podStartSLOduration=0.605139116 podStartE2EDuration="605.139116ms" podCreationTimestamp="2025-12-02 19:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 19:47:21.587118107 +0000 UTC m=+23.288113583" watchObservedRunningTime="2025-12-02 19:47:21.605139116 +0000 UTC m=+23.306134592"
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.780422     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681 WatchSource:0}: Error finding container 611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681: Status 404 returned error can't find the container with id 611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.810847     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d WatchSource:0}: Error finding container b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d: Status 404 returned error can't find the container with id b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.882017     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e WatchSource:0}: Error finding container fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e: Status 404 returned error can't find the container with id fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e
	Dec 02 19:47:22 ha-791576 kubelet[793]: I1202 19:47:22.507633     793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1a35affb644c5a6d9375f3959ef470" path="/var/lib/kubelet/pods/de1a35affb644c5a6d9375f3959ef470/volumes"
	Dec 02 19:47:22 ha-791576 kubelet[793]: I1202 19:47:22.594773     793 scope.go:117] "RemoveContainer" containerID="0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44"
	Dec 02 19:47:52 ha-791576 kubelet[793]: I1202 19:47:52.698574     793 scope.go:117] "RemoveContainer" containerID="1ab649bc08ab060742673f50eeb7c2a57ee5a4578e1a59eddd554c3ad6d7404e"
	Dec 02 19:47:58 ha-791576 kubelet[793]: E1202 19:47:58.432933     793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237\": container with ID starting with 364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237 not found: ID does not exist" containerID="364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237"
	Dec 02 19:47:58 ha-791576 kubelet[793]: I1202 19:47:58.432988     793 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237" err="rpc error: code = NotFound desc = could not find container \"364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237\": container with ID starting with 364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237 not found: ID does not exist"
	Dec 02 19:47:58 ha-791576 kubelet[793]: E1202 19:47:58.433641     793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a\": container with ID starting with f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a not found: ID does not exist" containerID="f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a"
	Dec 02 19:47:58 ha-791576 kubelet[793]: I1202 19:47:58.433699     793 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a" err="rpc error: code = NotFound desc = could not find container \"f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a\": container with ID starting with f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a not found: ID does not exist"
	Dec 02 19:53:17 ha-791576 kubelet[793]: I1202 19:53:17.758545     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhfw\" (UniqueName: \"kubernetes.io/projected/9231dae8-fa3f-4719-aa0b-e2893cf7afe6-kube-api-access-gdhfw\") pod \"busybox-7b57f96db7-l5g8z\" (UID: \"9231dae8-fa3f-4719-aa0b-e2893cf7afe6\") " pod="default/busybox-7b57f96db7-l5g8z"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-791576 -n ha-791576
helpers_test.go:269: (dbg) Run:  kubectl --context ha-791576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-k9bh8
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-791576 describe pod busybox-7b57f96db7-k9bh8
helpers_test.go:290: (dbg) kubectl --context ha-791576 describe pod busybox-7b57f96db7-k9bh8:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-k9bh8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fp5lt (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fp5lt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  110s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  110s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (528.76s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 node delete m03 --alsologtostderr -v 5: (5.981068534s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5: exit status 7 (589.96754ms)

                                                
                                                
-- stdout --
	ha-791576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-791576-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-791576-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:55:13.954509   91575 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:55:13.954710   91575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:13.954743   91575 out.go:374] Setting ErrFile to fd 2...
	I1202 19:55:13.954762   91575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:13.955035   91575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:55:13.955239   91575 out.go:368] Setting JSON to false
	I1202 19:55:13.955297   91575 mustload.go:66] Loading cluster: ha-791576
	I1202 19:55:13.955414   91575 notify.go:221] Checking for updates...
	I1202 19:55:13.955828   91575 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:13.955867   91575 status.go:174] checking status of ha-791576 ...
	I1202 19:55:13.956727   91575 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:13.986532   91575 status.go:371] ha-791576 host status = "Running" (err=<nil>)
	I1202 19:55:13.986604   91575 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:55:13.987003   91575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:14.007641   91575 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:55:14.007956   91575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:14.007998   91575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:14.033083   91575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:14.147371   91575 ssh_runner.go:195] Run: systemctl --version
	I1202 19:55:14.153861   91575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:55:14.166881   91575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:14.224758   91575 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 19:55:14.215348816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:14.225352   91575 kubeconfig.go:125] found "ha-791576" server: "https://192.168.49.254:8443"
	I1202 19:55:14.225387   91575 api_server.go:166] Checking apiserver status ...
	I1202 19:55:14.225434   91575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:55:14.236504   91575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/949/cgroup
	I1202 19:55:14.244348   91575 api_server.go:182] apiserver freezer: "9:freezer:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio/crio-a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9"
	I1202 19:55:14.244462   91575 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio/crio-a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9/freezer.state
	I1202 19:55:14.251933   91575 api_server.go:204] freezer state: "THAWED"
	I1202 19:55:14.251960   91575 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 19:55:14.260181   91575 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 19:55:14.260214   91575 status.go:463] ha-791576 apiserver status = Running (err=<nil>)
	I1202 19:55:14.260226   91575 status.go:176] ha-791576 status: &{Name:ha-791576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 19:55:14.260242   91575 status.go:174] checking status of ha-791576-m02 ...
	I1202 19:55:14.260552   91575 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:14.279539   91575 status.go:371] ha-791576-m02 host status = "Running" (err=<nil>)
	I1202 19:55:14.279562   91575 host.go:66] Checking if "ha-791576-m02" exists ...
	I1202 19:55:14.279851   91575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:14.296798   91575 host.go:66] Checking if "ha-791576-m02" exists ...
	I1202 19:55:14.297135   91575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:14.297176   91575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:14.316078   91575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:14.418784   91575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:55:14.431672   91575 kubeconfig.go:125] found "ha-791576" server: "https://192.168.49.254:8443"
	I1202 19:55:14.431697   91575 api_server.go:166] Checking apiserver status ...
	I1202 19:55:14.431738   91575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:55:14.442783   91575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	I1202 19:55:14.451111   91575 api_server.go:182] apiserver freezer: "9:freezer:/docker/9f4bcf7e2219f1b8a99053bd270081770481bf560ee51e9ea0bdb611f70faecd/crio/crio-ce48301ee1cc8ca842dc997b34090c3e38adf71ccbf6aaede509b865c87920eb"
	I1202 19:55:14.451237   91575 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9f4bcf7e2219f1b8a99053bd270081770481bf560ee51e9ea0bdb611f70faecd/crio/crio-ce48301ee1cc8ca842dc997b34090c3e38adf71ccbf6aaede509b865c87920eb/freezer.state
	I1202 19:55:14.459805   91575 api_server.go:204] freezer state: "THAWED"
	I1202 19:55:14.459843   91575 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 19:55:14.467965   91575 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 19:55:14.468002   91575 status.go:463] ha-791576-m02 apiserver status = Running (err=<nil>)
	I1202 19:55:14.468027   91575 status.go:176] ha-791576-m02 status: &{Name:ha-791576-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 19:55:14.468049   91575 status.go:174] checking status of ha-791576-m04 ...
	I1202 19:55:14.468365   91575 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:55:14.485636   91575 status.go:371] ha-791576-m04 host status = "Stopped" (err=<nil>)
	I1202 19:55:14.485705   91575 status.go:384] host is not running, skipping remaining checks
	I1202 19:55:14.485713   91575 status.go:176] ha-791576-m04 status: &{Name:ha-791576-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-791576
helpers_test.go:243: (dbg) docker inspect ha-791576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	        "Created": "2025-12-02T19:40:54.919017186Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 85549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:46:51.358682133Z",
	            "FinishedAt": "2025-12-02T19:46:50.744519975Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hostname",
	        "HostsPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hosts",
	        "LogPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94-json.log",
	        "Name": "/ha-791576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-791576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-791576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	                "LowerDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-791576",
	                "Source": "/var/lib/docker/volumes/ha-791576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-791576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-791576",
	                "name.minikube.sigs.k8s.io": "ha-791576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d42040ea74c4eeedb7f84e603f4c2848e2cd3d94b7edd53b3686d82839a44349",
	            "SandboxKey": "/var/run/docker/netns/d42040ea74c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-791576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:f0:35:b9:8a:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56dad1208e3b87b69e94173604d284ae0e7c0f0097a9b4d2483c8eb74a9ccc65",
	                    "EndpointID": "0de808d6cef38a4c373fb171d1e5a929c71554ad4cf487786793c13d6a707020",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-791576",
	                        "f426f8269bd9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
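(Editor's note: for anyone replaying this post-mortem by hand, the port mappings shown in the inspect output above can be queried directly with the same --format template that the cli_runner lines in the logs below use. A minimal sketch, assuming the ha-791576 container still exists on the host:

	# Print the host port mapped to the container's SSH port (22/tcp),
	# using the Go template seen in the "docker container inspect -f ..." log lines below.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-791576
	# For the inspect output captured above this prints 32813.

The 2376, 5000, 8443 and 32443 mappings can be read the same way by swapping the port key.)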
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-791576 -n ha-791576
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 logs -n 25: (1.338667213s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576-m04:/home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp testdata/cp-test.txt ha-791576-m04:/home/docker/cp-test.txt                                                             │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m04.txt │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m04_ha-791576.txt                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576.txt                                                 │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node start m02 --alsologtostderr -v 5                                                                                      │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:46 UTC │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │ 02 Dec 25 19:46 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5                                                                                   │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	│ node    │ ha-791576 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │ 02 Dec 25 19:55 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:46:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:46:51.075692   85424 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:46:51.075825   85424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:46:51.075836   85424 out.go:374] Setting ErrFile to fd 2...
	I1202 19:46:51.075841   85424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:46:51.076149   85424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:46:51.076551   85424 out.go:368] Setting JSON to false
	I1202 19:46:51.077367   85424 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5349,"bootTime":1764699462,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:46:51.077442   85424 start.go:143] virtualization:  
	I1202 19:46:51.082662   85424 out.go:179] * [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:46:51.085642   85424 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:46:51.085706   85424 notify.go:221] Checking for updates...
	I1202 19:46:51.091665   85424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:46:51.094539   85424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:51.097403   85424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:46:51.100336   85424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:46:51.103289   85424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:46:51.106849   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:51.106965   85424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:46:51.138890   85424 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:46:51.139003   85424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:46:51.198061   85424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:46:51.188947665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:46:51.198169   85424 docker.go:319] overlay module found
	I1202 19:46:51.201303   85424 out.go:179] * Using the docker driver based on existing profile
	I1202 19:46:51.204063   85424 start.go:309] selected driver: docker
	I1202 19:46:51.204087   85424 start.go:927] validating driver "docker" against &{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:51.204223   85424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:46:51.204328   85424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:46:51.266558   85424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:46:51.256321599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:46:51.266979   85424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:46:51.267013   85424 cni.go:84] Creating CNI manager for ""
	I1202 19:46:51.267084   85424 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 19:46:51.267148   85424 start.go:353] cluster config:
	{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:51.272255   85424 out.go:179] * Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	I1202 19:46:51.275067   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:46:51.277961   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:46:51.280789   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:51.280839   85424 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 19:46:51.280871   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:46:51.280873   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:46:51.280964   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:46:51.280974   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:46:51.281126   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:51.300000   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:46:51.300023   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:46:51.300050   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:46:51.300081   85424 start.go:360] acquireMachinesLock for ha-791576: {Name:mkf0dd990410be65dae2a66c72db1ebd52e2b0ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:46:51.300153   85424 start.go:364] duration metric: took 46.004µs to acquireMachinesLock for "ha-791576"
	I1202 19:46:51.300175   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:46:51.300183   85424 fix.go:54] fixHost starting: 
	I1202 19:46:51.300454   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:46:51.316816   85424 fix.go:112] recreateIfNeeded on ha-791576: state=Stopped err=<nil>
	W1202 19:46:51.316845   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:46:51.320143   85424 out.go:252] * Restarting existing docker container for "ha-791576" ...
	I1202 19:46:51.320230   85424 cli_runner.go:164] Run: docker start ha-791576
	I1202 19:46:51.575902   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:46:51.594134   85424 kic.go:430] container "ha-791576" state is running.
	I1202 19:46:51.594514   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:51.619517   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:51.619754   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:46:51.619817   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:51.639059   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:51.639374   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:51.639778   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:46:51.641510   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35428->127.0.0.1:32813: read: connection reset by peer
	I1202 19:46:54.791183   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:46:54.791204   85424 ubuntu.go:182] provisioning hostname "ha-791576"
	I1202 19:46:54.791275   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:54.809134   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:54.809441   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:54.809458   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576 && echo "ha-791576" | sudo tee /etc/hostname
	I1202 19:46:54.966477   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:46:54.966565   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:54.984050   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:54.984375   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:54.984402   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:46:55.137902   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:46:55.137928   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:46:55.138006   85424 ubuntu.go:190] setting up certificates
	I1202 19:46:55.138016   85424 provision.go:84] configureAuth start
	I1202 19:46:55.138084   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:55.155651   85424 provision.go:143] copyHostCerts
	I1202 19:46:55.155701   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:46:55.155740   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:46:55.155758   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:46:55.155836   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:46:55.155925   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:46:55.155955   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:46:55.155965   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:46:55.155993   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:46:55.156051   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:46:55.156071   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:46:55.156082   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:46:55.156108   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:46:55.156162   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576 san=[127.0.0.1 192.168.49.2 ha-791576 localhost minikube]
	I1202 19:46:55.641637   85424 provision.go:177] copyRemoteCerts
	I1202 19:46:55.641717   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:46:55.641763   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:55.660498   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:55.765103   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:46:55.765169   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:46:55.782097   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:46:55.782154   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:46:55.798837   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:46:55.798898   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 19:46:55.816023   85424 provision.go:87] duration metric: took 677.979406ms to configureAuth
	I1202 19:46:55.816052   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:46:55.816326   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:55.816455   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:55.833499   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:55.833854   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:55.833876   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:46:56.249298   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:46:56.249319   85424 machine.go:97] duration metric: took 4.629549894s to provisionDockerMachine
	I1202 19:46:56.249331   85424 start.go:293] postStartSetup for "ha-791576" (driver="docker")
	I1202 19:46:56.249341   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:46:56.249400   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:46:56.249454   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.268549   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.373420   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:46:56.376533   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:46:56.376562   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:46:56.376586   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:46:56.376642   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:46:56.376760   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:46:56.376771   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:46:56.376874   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:46:56.383745   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:46:56.400262   85424 start.go:296] duration metric: took 150.916843ms for postStartSetup
	I1202 19:46:56.400381   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:46:56.400460   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.420055   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.522566   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:46:56.527172   85424 fix.go:56] duration metric: took 5.226983089s for fixHost
	I1202 19:46:56.527198   85424 start.go:83] releasing machines lock for "ha-791576", held for 5.227032622s
	I1202 19:46:56.527261   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:56.543387   85424 ssh_runner.go:195] Run: cat /version.json
	I1202 19:46:56.543430   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:46:56.543494   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.543434   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.561404   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.561708   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.749544   85424 ssh_runner.go:195] Run: systemctl --version
	I1202 19:46:56.755696   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:46:56.790499   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:46:56.794459   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:46:56.794568   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:46:56.801919   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:46:56.801941   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:46:56.801971   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:46:56.802028   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:46:56.816910   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:46:56.829587   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:46:56.829715   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:46:56.844766   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:46:56.857092   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:46:56.975356   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:46:57.091555   85424 docker.go:234] disabling docker service ...
	I1202 19:46:57.091665   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:46:57.106660   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:46:57.120539   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:46:57.239669   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:46:57.366517   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:46:57.382471   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:46:57.396694   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:46:57.396813   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.405941   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:46:57.406053   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.415370   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.424417   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.433387   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:46:57.442311   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.451228   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.459398   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.468002   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:46:57.475168   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:46:57.482408   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:46:57.597548   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:46:57.804313   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:46:57.804451   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:46:57.808320   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:46:57.808445   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:46:57.812025   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:46:57.839390   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:46:57.839543   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:46:57.867354   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:46:57.901354   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:46:57.904220   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:46:57.920051   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:46:57.923689   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:46:57.933012   85424 kubeadm.go:884] updating cluster {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:46:57.933164   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:57.933217   85424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:46:57.967565   85424 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:46:57.967590   85424 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:46:57.967641   85424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:46:57.994848   85424 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:46:57.994872   85424 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:46:57.994881   85424 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:46:57.994976   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:46:57.995055   85424 ssh_runner.go:195] Run: crio config
	I1202 19:46:58.061390   85424 cni.go:84] Creating CNI manager for ""
	I1202 19:46:58.061418   85424 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 19:46:58.061446   85424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:46:58.061470   85424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-791576 NodeName:ha-791576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:46:58.061604   85424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-791576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:46:58.061624   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:46:58.061690   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:46:58.074421   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:46:58.074559   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:46:58.074648   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:46:58.083182   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:46:58.083291   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 19:46:58.091465   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 19:46:58.104313   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:46:58.118107   85424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1202 19:46:58.130768   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:46:58.143041   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:46:58.146530   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:46:58.155934   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:46:58.272546   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:46:58.287479   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.2
	I1202 19:46:58.287498   85424 certs.go:195] generating shared ca certs ...
	I1202 19:46:58.287513   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.287678   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:46:58.287718   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:46:58.287725   85424 certs.go:257] generating profile certs ...
	I1202 19:46:58.287810   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:46:58.287835   85424 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad
	I1202 19:46:58.287850   85424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1202 19:46:58.432480   85424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad ...
	I1202 19:46:58.432627   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad: {Name:mkc49591a089fa34cc904adb89cfa288cc2b970e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.432873   85424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad ...
	I1202 19:46:58.432910   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad: {Name:mk0be3cbf6db1780ac4ac275259d854f38f2158a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.433068   85424 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt
	I1202 19:46:58.433251   85424 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key
	I1202 19:46:58.433443   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:46:58.433477   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:46:58.433511   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:46:58.433556   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:46:58.433591   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:46:58.433624   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:46:58.433685   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:46:58.433721   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:46:58.433750   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:46:58.433833   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:46:58.433893   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:46:58.433920   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:46:58.433994   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:46:58.434052   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:46:58.434132   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:46:58.434225   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:46:58.434290   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.434337   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.434370   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.443939   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:46:58.463785   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:46:58.486458   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:46:58.508445   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:46:58.530317   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:46:58.548462   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:46:58.568358   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:46:58.586970   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:46:58.604714   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:46:58.627145   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:46:58.645042   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:46:58.663909   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:46:58.676006   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:46:58.681961   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:46:58.689749   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.693060   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.693152   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.735524   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:46:58.745065   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:46:58.754338   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.759068   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.759143   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.803928   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:46:58.811507   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:46:58.819506   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.823153   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.823249   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.865967   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
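	The three symlink commands above follow the standard OpenSSL hashed-directory convention: each CA file is hashed with "openssl x509 -hash -noout" and linked under /etc/ssl/certs as <hash>.0 so TLS clients can locate it. A sketch of the same step for a single certificate, using a path already present in this log:

	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 for this CA, as seen above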
	I1202 19:46:58.874198   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:46:58.878028   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:46:58.919236   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:46:58.961187   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:46:59.007842   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:46:59.061600   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:46:59.127987   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
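	Each of the "-checkend 86400" calls asks OpenSSL whether the certificate will expire within the next 86400 seconds (24 hours); exit status 0 means it stays valid at least that long, so the restart does not need to regenerate it. A small loop sketch over the same paths checked above:

	  for c in apiserver-kubelet-client.crt etcd/server.crt front-proxy-client.crt; do
	    openssl x509 -noout -in "/var/lib/minikube/certs/$c" -checkend 86400 \
	      && echo "$c: valid for >24h" || echo "$c: expires within 24h"
	  done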
	I1202 19:46:59.207795   85424 kubeadm.go:401] StartCluster: {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:59.207925   85424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:46:59.207988   85424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:46:59.265803   85424 cri.go:89] found id: "71e9ce78d64661ac6d00283cdb79e431fdb65c5c2f57fa8aaa18d21677420d38"
	I1202 19:46:59.265827   85424 cri.go:89] found id: "a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9"
	I1202 19:46:59.265833   85424 cri.go:89] found id: "0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44"
	I1202 19:46:59.265836   85424 cri.go:89] found id: "392beb226748f9eb08b097b707e9c3fae2ea843b47c447e75c2c16d866e678de"
	I1202 19:46:59.265840   85424 cri.go:89] found id: "a038e721d900d1d05f302d84321aed3efa00807fa84f377dff1bb59ed20d56ce"
	I1202 19:46:59.265843   85424 cri.go:89] found id: ""
	I1202 19:46:59.265890   85424 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 19:46:59.290356   85424 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:46:59Z" level=error msg="open /run/runc: no such file or directory"
	I1202 19:46:59.290428   85424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:46:59.301612   85424 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:46:59.301633   85424 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:46:59.301705   85424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:46:59.310893   85424 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:46:59.311284   85424 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-791576" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:59.311384   85424 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "ha-791576" cluster setting kubeconfig missing "ha-791576" context setting]
	I1202 19:46:59.311696   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.312205   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:46:59.312709   85424 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:46:59.312741   85424 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:46:59.312748   85424 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:46:59.312753   85424 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:46:59.312758   85424 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:46:59.313075   85424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:46:59.313166   85424 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:46:59.323603   85424 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:46:59.323629   85424 kubeadm.go:602] duration metric: took 21.981794ms to restartPrimaryControlPlane
	I1202 19:46:59.323638   85424 kubeadm.go:403] duration metric: took 115.854562ms to StartCluster
	I1202 19:46:59.323653   85424 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.323714   85424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:59.324315   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.324515   85424 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:46:59.324543   85424 start.go:242] waiting for startup goroutines ...
	I1202 19:46:59.324556   85424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:46:59.325058   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:59.330563   85424 out.go:179] * Enabled addons: 
	I1202 19:46:59.333607   85424 addons.go:530] duration metric: took 9.049214ms for enable addons: enabled=[]
	I1202 19:46:59.333674   85424 start.go:247] waiting for cluster config update ...
	I1202 19:46:59.333687   85424 start.go:256] writing updated cluster config ...
	I1202 19:46:59.337224   85424 out.go:203] 
	I1202 19:46:59.340497   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:59.340616   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.343973   85424 out.go:179] * Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	I1202 19:46:59.346800   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:46:59.349828   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:46:59.352721   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:59.352753   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:46:59.352862   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:46:59.352879   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:46:59.353002   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.353206   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:46:59.379004   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:46:59.379030   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:46:59.379043   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:46:59.379066   85424 start.go:360] acquireMachinesLock for ha-791576-m02: {Name:mka8905b718126ef1af03281337b5ea61f190248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:46:59.379121   85424 start.go:364] duration metric: took 35.265µs to acquireMachinesLock for "ha-791576-m02"
	I1202 19:46:59.379145   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:46:59.379150   85424 fix.go:54] fixHost starting: m02
	I1202 19:46:59.379415   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:46:59.419284   85424 fix.go:112] recreateIfNeeded on ha-791576-m02: state=Stopped err=<nil>
	W1202 19:46:59.419317   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:46:59.422504   85424 out.go:252] * Restarting existing docker container for "ha-791576-m02" ...
	I1202 19:46:59.422616   85424 cli_runner.go:164] Run: docker start ha-791576-m02
	I1202 19:46:59.837868   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:46:59.874389   85424 kic.go:430] container "ha-791576-m02" state is running.
	I1202 19:46:59.874756   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:46:59.901234   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.901470   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:46:59.901529   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:46:59.939434   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:59.939741   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:46:59.939756   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:46:59.941956   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:47:03.181981   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:47:03.182010   85424 ubuntu.go:182] provisioning hostname "ha-791576-m02"
	I1202 19:47:03.182083   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.211290   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:03.211596   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:03.211614   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m02 && echo "ha-791576-m02" | sudo tee /etc/hostname
	I1202 19:47:03.424005   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:47:03.424078   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.477630   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:03.477958   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:03.477977   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:47:03.677990   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:47:03.678027   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:47:03.678048   85424 ubuntu.go:190] setting up certificates
	I1202 19:47:03.678060   85424 provision.go:84] configureAuth start
	I1202 19:47:03.678128   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:47:03.701231   85424 provision.go:143] copyHostCerts
	I1202 19:47:03.701274   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:03.701304   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:47:03.701318   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:03.701396   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:47:03.701478   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:03.701500   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:47:03.701510   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:03.701537   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:47:03.701637   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:03.701668   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:47:03.701674   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:03.701705   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:47:03.701761   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m02 san=[127.0.0.1 192.168.49.3 ha-791576-m02 localhost minikube]
	I1202 19:47:03.945165   85424 provision.go:177] copyRemoteCerts
	I1202 19:47:03.945235   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:47:03.945280   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.975366   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.102132   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:47:04.102208   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:47:04.134543   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:47:04.134604   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:47:04.161226   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:47:04.161297   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:47:04.192644   85424 provision.go:87] duration metric: took 514.571013ms to configureAuth
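	configureAuth regenerates the machine's server certificate with the SANs listed above and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A sketch (assuming those same paths and a reasonably recent OpenSSL for the -ext flag) for verifying that the copied pair chains back to the CA:

	  sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	  sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem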
	I1202 19:47:04.192676   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:47:04.192912   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:04.193014   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.219315   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:04.219619   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:04.219638   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:47:04.675291   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:47:04.675356   85424 machine.go:97] duration metric: took 4.773873492s to provisionDockerMachine
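	The SSH command a few lines above wrote /etc/sysconfig/crio.minikube so that CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry, then restarted crio. A quick, hedged spot check that the drop-in landed and the runtime came back:

	  cat /etc/sysconfig/crio.minikube    # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  sudo systemctl is-active crio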
	I1202 19:47:04.675373   85424 start.go:293] postStartSetup for "ha-791576-m02" (driver="docker")
	I1202 19:47:04.675386   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:47:04.675452   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:47:04.675498   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.694108   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.797554   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:47:04.800903   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:47:04.800934   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:47:04.800945   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:47:04.801002   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:47:04.801077   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:47:04.801089   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:47:04.801185   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:47:04.808567   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:04.826419   85424 start.go:296] duration metric: took 151.029848ms for postStartSetup
	I1202 19:47:04.826519   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:47:04.826573   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.843360   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.943115   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:47:04.948188   85424 fix.go:56] duration metric: took 5.569031295s for fixHost
	I1202 19:47:04.948214   85424 start.go:83] releasing machines lock for "ha-791576-m02", held for 5.56907917s
	I1202 19:47:04.948279   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:47:04.970572   85424 out.go:179] * Found network options:
	I1202 19:47:04.973538   85424 out.go:179]   - NO_PROXY=192.168.49.2
	W1202 19:47:04.976397   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:04.976445   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:47:04.976513   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:47:04.976562   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.976885   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:47:04.976937   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.998993   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:05.000433   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:05.146894   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:47:05.207886   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:47:05.207960   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:47:05.215827   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:47:05.215855   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:47:05.215923   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:47:05.215992   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:47:05.231545   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:47:05.245040   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:47:05.245102   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:47:05.260499   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:47:05.273511   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:47:05.399821   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:47:05.547719   85424 docker.go:234] disabling docker service ...
	I1202 19:47:05.547833   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:47:05.574826   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:47:05.600862   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:47:05.835995   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:47:06.044894   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:47:06.061431   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:47:06.092815   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:47:06.092932   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.102629   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:47:06.102737   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.112408   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.122046   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.131510   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:47:06.140127   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.149293   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.162481   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.173417   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:47:06.181633   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:47:06.189368   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:06.407349   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
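	The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, switch CRI-O's cgroup_manager to cgroupfs, set conmon_cgroup to "pod" and inject the net.ipv4.ip_unprivileged_port_start=0 sysctl before the daemon is reloaded and restarted. A sketch for spot-checking the rewritten file at the path used in the log:

	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf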
	I1202 19:47:06.656582   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:47:06.656693   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:47:06.660537   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:47:06.660607   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:47:06.664156   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:47:06.693772   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:47:06.693853   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:47:06.722024   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:47:06.754035   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:47:06.757007   85424 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:47:06.759990   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:47:06.777500   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:47:06.781343   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:47:06.791187   85424 mustload.go:66] Loading cluster: ha-791576
	I1202 19:47:06.791444   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:06.791707   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:47:06.808279   85424 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:47:06.808561   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.3
	I1202 19:47:06.808576   85424 certs.go:195] generating shared ca certs ...
	I1202 19:47:06.808596   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:47:06.808787   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:47:06.808843   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:47:06.808854   85424 certs.go:257] generating profile certs ...
	I1202 19:47:06.808932   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:47:06.808997   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.7b209479
	I1202 19:47:06.809041   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:47:06.809055   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:47:06.809070   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:47:06.809087   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:47:06.809100   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:47:06.809110   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:47:06.809124   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:47:06.809139   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:47:06.809152   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:47:06.809203   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:47:06.809238   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:47:06.809249   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:47:06.809275   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:47:06.809305   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:47:06.809331   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:47:06.809375   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:06.809409   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:06.809426   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:47:06.809437   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:47:06.809494   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:47:06.826818   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:47:06.926038   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:47:06.930094   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:47:06.938514   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:47:06.942246   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:47:06.951163   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:47:06.954843   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:47:06.962999   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:47:06.966675   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:47:06.975178   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:47:06.978885   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:47:06.987509   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:47:06.990939   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:47:06.999005   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:47:07.017141   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:47:07.034232   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:47:07.052223   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:47:07.068874   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:47:07.085118   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:47:07.102568   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:47:07.119624   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:47:07.137149   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:47:07.155661   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:47:07.174795   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:47:07.191770   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:47:07.204561   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:47:07.217443   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:47:07.230339   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:47:07.242695   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:47:07.255417   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:47:07.267762   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
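	Because m02 is an additional control-plane node, the shared key material (sa.pub/sa.key, the front-proxy CA and the etcd CA) is read from the primary over SSH into memory and written to the same paths on m02, so both API servers sign and verify with identical keys. A hedged sketch for confirming the copies match, run on each node:

	  for f in sa.pub sa.key front-proxy-ca.crt etcd/ca.crt; do
	    sudo sha256sum "/var/lib/minikube/certs/$f"
	  done    # digests should agree between ha-791576 and ha-791576-m02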
	I1202 19:47:07.280304   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:47:07.286551   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:47:07.294800   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.298454   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.298514   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.338926   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:47:07.346584   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:47:07.354270   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.358006   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.358069   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.398667   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:47:07.406676   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:47:07.414843   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.419161   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.419247   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.460207   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:47:07.467798   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:47:07.471321   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:47:07.514285   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:47:07.561278   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:47:07.603224   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:47:07.644697   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:47:07.686079   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:47:07.727346   85424 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1202 19:47:07.727470   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:47:07.727522   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:47:07.727601   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:47:07.740480   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:47:07.740546   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:47:07.740622   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:47:07.748776   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:47:07.748850   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:47:07.756859   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:47:07.770007   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:47:07.782397   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:47:07.795978   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:47:07.799804   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:47:07.808809   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:07.936978   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:47:07.950174   85424 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:47:07.950576   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:07.954257   85424 out.go:179] * Verifying Kubernetes components...
	I1202 19:47:07.957286   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:08.088938   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:47:08.104389   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:47:08.104523   85424 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:47:08.104787   85424 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m02" to be "Ready" ...
	W1202 19:47:18.106667   85424 node_ready.go:55] error getting node "ha-791576-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-791576-m02": net/http: TLS handshake timeout
	I1202 19:47:20.815620   85424 node_ready.go:49] node "ha-791576-m02" is "Ready"
	I1202 19:47:20.815646   85424 node_ready.go:38] duration metric: took 12.710819831s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:47:20.815659   85424 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:47:20.815715   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:21.316644   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:21.816110   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:22.315948   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:22.815840   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.316118   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.815903   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.838753   85424 api_server.go:72] duration metric: took 15.888533132s to wait for apiserver process to appear ...
	I1202 19:47:23.838776   85424 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:47:23.838807   85424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:47:23.866609   85424 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:47:23.870765   85424 api_server.go:141] control plane version: v1.34.2
	I1202 19:47:23.870793   85424 api_server.go:131] duration metric: took 32.004959ms to wait for apiserver health ...
	I1202 19:47:23.870804   85424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:47:23.889009   85424 system_pods.go:59] 26 kube-system pods found
	I1202 19:47:23.889120   85424 system_pods.go:61] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.889176   85424 system_pods.go:61] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.889202   85424 system_pods.go:61] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:47:23.889222   85424 system_pods.go:61] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:47:23.889255   85424 system_pods.go:61] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:47:23.889279   85424 system_pods.go:61] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:47:23.889300   85424 system_pods.go:61] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:47:23.889339   85424 system_pods.go:61] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:47:23.889361   85424 system_pods.go:61] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:47:23.889396   85424 system_pods.go:61] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.889439   85424 system_pods.go:61] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.889463   85424 system_pods.go:61] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:47:23.889517   85424 system_pods.go:61] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.889553   85424 system_pods.go:61] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.889589   85424 system_pods.go:61] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:47:23.889612   85424 system_pods.go:61] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:47:23.889629   85424 system_pods.go:61] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:47:23.889649   85424 system_pods.go:61] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:47:23.889703   85424 system_pods.go:61] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:47:23.889730   85424 system_pods.go:61] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:47:23.889767   85424 system_pods.go:61] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:47:23.889789   85424 system_pods.go:61] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:47:23.889813   85424 system_pods.go:61] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:47:23.889853   85424 system_pods.go:61] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:47:23.889881   85424 system_pods.go:61] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:47:23.889945   85424 system_pods.go:61] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:47:23.889982   85424 system_pods.go:74] duration metric: took 19.17073ms to wait for pod list to return data ...
	I1202 19:47:23.890015   85424 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:47:23.903242   85424 default_sa.go:45] found service account: "default"
	I1202 19:47:23.903345   85424 default_sa.go:55] duration metric: took 13.295846ms for default service account to be created ...
	I1202 19:47:23.903390   85424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:47:23.918952   85424 system_pods.go:86] 26 kube-system pods found
	I1202 19:47:23.919047   85424 system_pods.go:89] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.919079   85424 system_pods.go:89] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.919121   85424 system_pods.go:89] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:47:23.919147   85424 system_pods.go:89] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:47:23.919165   85424 system_pods.go:89] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:47:23.919210   85424 system_pods.go:89] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:47:23.919234   85424 system_pods.go:89] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:47:23.919257   85424 system_pods.go:89] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:47:23.919293   85424 system_pods.go:89] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:47:23.919328   85424 system_pods.go:89] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.919349   85424 system_pods.go:89] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.919407   85424 system_pods.go:89] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:47:23.919452   85424 system_pods.go:89] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.919498   85424 system_pods.go:89] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.919527   85424 system_pods.go:89] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:47:23.919571   85424 system_pods.go:89] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:47:23.919594   85424 system_pods.go:89] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:47:23.919611   85424 system_pods.go:89] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:47:23.919658   85424 system_pods.go:89] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:47:23.919681   85424 system_pods.go:89] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:47:23.919700   85424 system_pods.go:89] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:47:23.919737   85424 system_pods.go:89] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:47:23.919770   85424 system_pods.go:89] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:47:23.919789   85424 system_pods.go:89] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:47:23.919824   85424 system_pods.go:89] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:47:23.919853   85424 system_pods.go:89] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:47:23.919880   85424 system_pods.go:126] duration metric: took 16.439891ms to wait for k8s-apps to be running ...
	I1202 19:47:23.919920   85424 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:47:23.920039   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:47:23.943430   85424 system_svc.go:56] duration metric: took 23.498391ms WaitForService to wait for kubelet
	I1202 19:47:23.943548   85424 kubeadm.go:587] duration metric: took 15.993331779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:47:23.943620   85424 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:47:23.963377   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963414   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963434   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963440   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963444   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963448   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963453   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963456   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963461   85424 node_conditions.go:105] duration metric: took 19.808046ms to run NodePressure ...
	I1202 19:47:23.963474   85424 start.go:242] waiting for startup goroutines ...
	I1202 19:47:23.963497   85424 start.go:256] writing updated cluster config ...
	I1202 19:47:23.966956   85424 out.go:203] 
	I1202 19:47:23.970081   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:23.970200   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:23.973545   85424 out.go:179] * Starting "ha-791576-m03" control-plane node in "ha-791576" cluster
	I1202 19:47:23.977222   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:47:23.980067   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:47:23.982893   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:47:23.982917   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:47:23.982945   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:47:23.983271   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:47:23.983306   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:47:23.983500   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:24.032012   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:47:24.032039   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:47:24.032056   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:47:24.032084   85424 start.go:360] acquireMachinesLock for ha-791576-m03: {Name:mke11e8197b1eb1f85f8abb689432afa86afcde6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:47:24.032155   85424 start.go:364] duration metric: took 54.948µs to acquireMachinesLock for "ha-791576-m03"
	I1202 19:47:24.032184   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:47:24.032191   85424 fix.go:54] fixHost starting: m03
	I1202 19:47:24.032519   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:47:24.061731   85424 fix.go:112] recreateIfNeeded on ha-791576-m03: state=Stopped err=<nil>
	W1202 19:47:24.061757   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:47:24.064925   85424 out.go:252] * Restarting existing docker container for "ha-791576-m03" ...
	I1202 19:47:24.065009   85424 cli_runner.go:164] Run: docker start ha-791576-m03
	I1202 19:47:24.481554   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:47:24.511641   85424 kic.go:430] container "ha-791576-m03" state is running.
	I1202 19:47:24.512003   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:24.552004   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:24.552243   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:47:24.552303   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:24.583210   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:24.583581   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:24.583591   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:47:24.584229   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47380->127.0.0.1:32823: read: connection reset by peer
	I1202 19:47:27.831905   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m03
	
	I1202 19:47:27.832023   85424 ubuntu.go:182] provisioning hostname "ha-791576-m03"
	I1202 19:47:27.832106   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:27.866228   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:27.866528   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:27.866538   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m03 && echo "ha-791576-m03" | sudo tee /etc/hostname
	I1202 19:47:28.206271   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m03
	
	I1202 19:47:28.206429   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:28.235744   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:28.236058   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:28.236081   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:47:28.537696   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:47:28.537727   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:47:28.537745   85424 ubuntu.go:190] setting up certificates
	I1202 19:47:28.537786   85424 provision.go:84] configureAuth start
	I1202 19:47:28.537865   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:28.575346   85424 provision.go:143] copyHostCerts
	I1202 19:47:28.575393   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:28.575433   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:47:28.575445   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:28.575528   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:47:28.575619   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:28.575644   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:47:28.575649   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:28.575682   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:47:28.575735   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:28.575759   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:47:28.575763   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:28.575791   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:47:28.575848   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m03 san=[127.0.0.1 192.168.49.4 ha-791576-m03 localhost minikube]
	I1202 19:47:28.737231   85424 provision.go:177] copyRemoteCerts
	I1202 19:47:28.737301   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:47:28.737343   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:28.767082   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:28.894686   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:47:28.894758   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:47:28.937222   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:47:28.937295   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:47:29.025224   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:47:29.025298   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:47:29.085079   85424 provision.go:87] duration metric: took 547.273818ms to configureAuth
	I1202 19:47:29.085116   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:47:29.085371   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:29.085483   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:29.111990   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:29.112296   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:29.112318   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:47:29.803395   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:47:29.803431   85424 machine.go:97] duration metric: took 5.251179236s to provisionDockerMachine
	I1202 19:47:29.803442   85424 start.go:293] postStartSetup for "ha-791576-m03" (driver="docker")
	I1202 19:47:29.803453   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:47:29.803521   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:47:29.803574   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:29.833575   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:29.954416   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:47:29.960020   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:47:29.960062   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:47:29.960082   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:47:29.960151   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:47:29.960229   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:47:29.960240   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:47:29.960341   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:47:29.982991   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:30.035283   85424 start.go:296] duration metric: took 231.823498ms for postStartSetup
	I1202 19:47:30.035374   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:47:30.035419   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.070768   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.190107   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:47:30.196635   85424 fix.go:56] duration metric: took 6.164437606s for fixHost
	I1202 19:47:30.196666   85424 start.go:83] releasing machines lock for "ha-791576-m03", held for 6.164502097s
	I1202 19:47:30.196744   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:30.237763   85424 out.go:179] * Found network options:
	I1202 19:47:30.240640   85424 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 19:47:30.243436   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243469   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243493   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243503   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:47:30.243571   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:47:30.243615   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.243653   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:47:30.243712   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.273326   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.286780   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.653045   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:47:30.787771   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:47:30.787854   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:47:30.833087   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:47:30.833158   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:47:30.833206   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:47:30.833279   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:47:30.864249   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:47:30.889806   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:47:30.889863   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:47:30.917840   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:47:30.984243   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:47:31.253878   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:47:31.593901   85424 docker.go:234] disabling docker service ...
	I1202 19:47:31.594010   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:47:31.621301   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:47:31.661349   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:47:32.003626   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:47:32.391869   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:47:32.435757   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:47:32.493110   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:47:32.493217   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.524849   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:47:32.524962   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.565517   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.598569   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.641426   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:47:32.662712   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.677733   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.714192   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
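	(Taken together, the sed edits above configure CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf for the cgroupfs cgroup driver, the registry.k8s.io/pause:3.10.1 pause image, a "pod" conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm the result on the node would be the sketch below; this check is not part of the logged run:
	    # show the values written by the preceding sed commands
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, given the commands above:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	)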
	I1202 19:47:32.736481   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:47:32.750823   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:47:32.766296   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:33.098331   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:49:03.522289   85424 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.42388116s)
	I1202 19:49:03.522317   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:49:03.522385   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:49:03.526524   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:49:03.526585   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:49:03.530326   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:49:03.571925   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:49:03.572010   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:49:03.609479   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:49:03.650610   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:49:03.653540   85424 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:49:03.656557   85424 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 19:49:03.659527   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:49:03.677810   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:49:03.681792   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:49:03.692859   85424 mustload.go:66] Loading cluster: ha-791576
	I1202 19:49:03.693117   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:49:03.693363   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:49:03.709753   85424 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:49:03.710031   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.4
	I1202 19:49:03.710040   85424 certs.go:195] generating shared ca certs ...
	I1202 19:49:03.710054   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:49:03.710179   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:49:03.710223   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:49:03.710229   85424 certs.go:257] generating profile certs ...
	I1202 19:49:03.710306   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:49:03.710371   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.7aeb3685
	I1202 19:49:03.710427   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:49:03.710436   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:49:03.710521   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:49:03.710542   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:49:03.710554   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:49:03.710565   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:49:03.710577   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:49:03.710598   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:49:03.710610   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:49:03.710662   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:49:03.710695   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:49:03.710703   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:49:03.710730   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:49:03.710755   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:49:03.710778   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:49:03.710822   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:49:03.711042   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:49:03.711071   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:49:03.711083   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:03.711181   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:49:03.728781   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:49:03.830007   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:49:03.833942   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:49:03.842299   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:49:03.846144   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:49:03.854532   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:49:03.857855   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:49:03.866234   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:49:03.870642   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:49:03.879137   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:49:03.883549   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:49:03.893143   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:49:03.896763   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:49:03.904772   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:49:03.925546   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:49:03.951452   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:49:03.975797   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:49:03.998666   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:49:04.023000   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:49:04.042956   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:49:04.061815   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:49:04.081799   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:49:04.113304   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:49:04.131292   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:49:04.149359   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:49:04.163556   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:49:04.177001   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:49:04.191331   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:49:04.204195   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:49:04.216872   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:49:04.229341   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:49:04.242596   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:49:04.248724   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:49:04.256868   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.260467   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.260531   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.301235   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:49:04.308894   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:49:04.317175   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.320635   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.320703   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.362642   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:49:04.371073   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:49:04.379233   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.383803   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.383867   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.425589   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:49:04.433230   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:49:04.436905   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:49:04.478804   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:49:04.521202   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:49:04.562989   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:49:04.603885   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:49:04.644970   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:49:04.686001   85424 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1202 19:49:04.686142   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:49:04.686175   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:49:04.686225   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:49:04.698332   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:49:04.698392   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:49:04.698462   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:49:04.706596   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:49:04.706697   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:49:04.714019   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:49:04.726439   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:49:04.740943   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:49:04.755477   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:49:04.759442   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
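
The one-liner above keeps the /etc/hosts update idempotent: it filters out any previous control-plane.minikube.internal entry, appends the current VIP mapping, and copies the result back over /etc/hosts. The same logic, spelled out with comments (illustrative; the temp file name reuses the shell's PID exactly as in the log):

    # Drop any stale entry, then append the VIP for the control-plane alias
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.49.254\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$
    # Replace the live file with the rebuilt copy
    sudo cp /tmp/h.$$ /etc/hosts
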
	I1202 19:49:04.769254   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:49:04.889322   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:49:04.903380   85424 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:49:04.903723   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:49:04.907146   85424 out.go:179] * Verifying Kubernetes components...
	I1202 19:49:04.910053   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:49:05.053002   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:49:05.069583   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:49:05.069742   85424 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:49:05.070007   85424 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m03" to be "Ready" ...
	W1202 19:49:07.074081   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:09.574441   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:12.073995   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:14.075158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:16.574109   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:19.074269   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:21.573633   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:24.075532   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:26.573178   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:28.573751   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:30.574196   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:33.074433   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:35.574293   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:38.074355   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:40.572995   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:42.573766   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:44.574193   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:47.074875   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:49.574182   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:52.073848   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:54.074871   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:56.574461   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:59.074135   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:01.075025   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:03.573959   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:05.574229   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:08.073434   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:10.075308   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:12.573891   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:14.574258   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:17.075768   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:19.574491   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:22.073796   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:24.074628   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:26.574014   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:29.073484   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:31.074366   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:33.077573   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:35.574409   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:38.074415   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:40.076462   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:42.573398   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:44.574236   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:47.074487   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:49.574052   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:51.574295   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:53.574395   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:56.074579   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:58.573990   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:00.574496   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:03.074093   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:05.573622   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:07.574521   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:10.074177   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:12.074658   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:14.574234   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:17.073779   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:19.074824   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:21.075177   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:23.574226   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:25.574533   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:28.074719   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:30.573516   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:32.574725   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:35.073690   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:37.073844   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:39.074254   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:41.074445   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:43.574427   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:46.074495   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:48.075157   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:50.574559   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:53.074039   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:55.075518   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:57.574296   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:00.125095   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:02.573158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:04.574068   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:07.074149   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:09.573261   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:11.574325   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:14.074158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:16.074259   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:18.578414   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:21.074856   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:23.573367   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:25.574018   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:28.073545   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:30.074750   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:32.074791   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:34.573792   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:37.073884   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:39.074273   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:41.074719   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:43.573239   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:45.574142   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:48.073730   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:50.074154   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:52.074293   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:54.074677   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:56.574118   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:58.575322   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:01.074442   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:03.574221   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:06.074487   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:08.573768   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:10.574179   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:13.073867   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:15.074575   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:17.581482   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:20.075478   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:22.574434   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:25.079089   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:27.574074   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:30.074259   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:32.573125   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:34.573275   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:36.573791   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:39.075423   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:41.573386   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:43.573426   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:46.074050   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:48.074229   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:50.574069   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:53.073917   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:55.573030   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:57.574590   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:00.099899   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:02.573639   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:04.573928   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:06.574012   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:08.574318   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:11.073394   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:13.074011   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:15.074319   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:17.573595   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:19.574170   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:22.074150   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:24.074500   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:26.573647   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:28.573867   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:30.574160   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:33.074365   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:35.074585   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:37.574466   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:40.075645   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:42.573981   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:44.574615   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:46.576226   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:49.074146   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:51.074479   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:53.574396   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:56.073822   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:58.074332   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:00.115264   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:02.573371   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:05.070625   85424 node_ready.go:55] error getting node "ha-791576-m03" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 19:55:05.070669   85424 node_ready.go:38] duration metric: took 6m0.000641476s for node "ha-791576-m03" to be "Ready" ...
	I1202 19:55:05.073996   85424 out.go:203] 
	W1202 19:55:05.077043   85424 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:55:05.077067   85424 out.go:285] * 
	W1202 19:55:05.079288   85424 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:55:05.082165   85424 out.go:203] 
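
The failure above is a readiness timeout rather than a provisioning error: the m03 kubelet was started, but the node never reported Ready within the 6-minute budget, so the run exits with GUEST_START. The condition minikube was polling can be inspected directly with kubectl, for example (assuming a reachable API server for this cluster):

    # Just the Ready condition of the stuck node
    kubectl get node ha-791576-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    # Recent conditions and events for the same node
    kubectl describe node ha-791576-m03 | tail -n 30
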
	
	
	==> CRI-O <==
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.307348407Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.31130981Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.311346765Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.171067478Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-l5g8z/POD" id=3186c4cc-fc42-4e21-9951-8f685af60ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.171146976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.176647774Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-l5g8z Namespace:default ID:e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 UID:9231dae8-fa3f-4719-aa0b-e2893cf7afe6 NetNS:/var/run/netns/7fb14d7a-6c4f-4e81-940a-0b966199ab09 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000785c8}] Aliases:map[]}"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.176731103Z" level=info msg="Adding pod default_busybox-7b57f96db7-l5g8z to CNI network \"kindnet\" (type=ptp)"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.190912432Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-l5g8z Namespace:default ID:e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 UID:9231dae8-fa3f-4719-aa0b-e2893cf7afe6 NetNS:/var/run/netns/7fb14d7a-6c4f-4e81-940a-0b966199ab09 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000785c8}] Aliases:map[]}"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.191265818Z" level=info msg="Checking pod default_busybox-7b57f96db7-l5g8z for CNI network kindnet (type=ptp)"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.194319023Z" level=info msg="Ran pod sandbox e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 with infra container: default/busybox-7b57f96db7-l5g8z/POD" id=3186c4cc-fc42-4e21-9951-8f685af60ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195591573Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195818661Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195924693Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28 found" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.198155298Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=6aac8e64-e8fe-4d24-8b7c-6bfee82ead34 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.21349448Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.105132802Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=6aac8e64-e8fe-4d24-8b7c-6bfee82ead34 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.109539064Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2fa56a44-9db8-4756-acdc-664a3a83dc98 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.115065486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2fa5b526-dbea-407e-8ac9-a7b0f9d1c48f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.123684127Z" level=info msg="Creating container: default/busybox-7b57f96db7-l5g8z/busybox" id=56f9dc5c-2dab-42d0-8b1a-c7a9d3167a95 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.12386251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.129280857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.12990521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.146011626Z" level=info msg="Created container e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2: default/busybox-7b57f96db7-l5g8z/busybox" id=56f9dc5c-2dab-42d0-8b1a-c7a9d3167a95 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.147143527Z" level=info msg="Starting container: e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2" id=e10cf319-014f-4dc6-80be-da1936659c45 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.151353561Z" level=info msg="Started container" PID=1519 containerID=e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2 description=default/busybox-7b57f96db7-l5g8z/busybox id=e10cf319-014f-4dc6-80be-da1936659c45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4
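
During the same window the runtime on the primary node is healthy: the busybox image is pulled, and a container is created and started inside sandbox e289b1b32fb87. Those IDs could be cross-checked against CRI-O directly, for instance (IDs copied from the log above):

    sudo crictl pods --name busybox-7b57f96db7-l5g8z
    sudo crictl ps --pod e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4
    sudo crictl logs e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2
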
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	e51c0c263b11d       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   About a minute ago   Running             busybox                   0                   e289b1b32fb87       busybox-7b57f96db7-l5g8z            default
	c74c4f823da84       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      7 minutes ago        Running             storage-provisioner       3                   611ff54ac571a       storage-provisioner                 kube-system
	0ca58a409109c       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      7 minutes ago        Running             kube-controller-manager   2                   065d40fa0cc23       kube-controller-manager-ha-791576   kube-system
	3335ad39bba28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      7 minutes ago        Running             coredns                   1                   785cb0dfb8b28       coredns-66bc5c9577-w2245            kube-system
	406623e1d0127       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      7 minutes ago        Running             coredns                   1                   fda4cb2ab460e       coredns-66bc5c9577-hw99j            kube-system
	e3e00e2da8bd7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      7 minutes ago        Running             kindnet-cni               1                   b3c174d7d003c       kindnet-m2l5j                       kube-system
	1ab649bc08ab0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      7 minutes ago        Exited              storage-provisioner       2                   611ff54ac571a       storage-provisioner                 kube-system
	4f18d2c8cbb18       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      7 minutes ago        Running             kube-proxy                1                   5e76fe966d8bb       kube-proxy-q5vfv                    kube-system
	71e9ce78d6466       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70                                      8 minutes ago        Running             kube-vip                  0                   75d9a258d0378       kube-vip-ha-791576                  kube-system
	a18297fd12571       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      8 minutes ago        Running             kube-apiserver            1                   d2e111aee1d35       kube-apiserver-ha-791576            kube-system
	0e19b5bb45d9e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      8 minutes ago        Exited              kube-controller-manager   1                   065d40fa0cc23       kube-controller-manager-ha-791576   kube-system
	392beb226748f       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      8 minutes ago        Running             etcd                      1                   d6f57a5f40b96       etcd-ha-791576                      kube-system
	a038e721d900d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      8 minutes ago        Running             kube-scheduler            1                   6a36f33b4c7e9       kube-scheduler-ha-791576            kube-system
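
This table is the runtime's own inventory of containers on the primary node; kube-controller-manager and storage-provisioner each show an Exited attempt followed by a Running one, consistent with the restarts after the node came back. The listing corresponds to something like the following (illustrative):

    # All containers on the node, including exited attempts
    minikube ssh -p ha-791576 -- sudo crictl ps -a
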
	
	
	==> coredns [3335ad39bba28fdd293923b313dec13f1a33d55117eaf80083a781dff0d8bdea] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42808 - 26955 "HINFO IN 630864626443792637.4045400913318639804. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02501392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [406623e1d012777bc4fd0347ac8b3f005c55afa441ea4b81863c6c008ee30979] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47637 - 44081 "HINFO IN 8875301780668194042.4808208815551959978. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019656625s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
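
Both CoreDNS replicas report the same symptom: list calls to the in-cluster API VIP 10.96.0.1:443 time out, so the kubernetes plugin starts with an unsynced cache. That matches the control-plane disruption earlier in the run. Two quick, read-only checks of the path CoreDNS is dialing (illustrative):

    # The ClusterIP CoreDNS targets, and the API-server endpoints behind it
    kubectl get svc kubernetes -o wide
    kubectl get endpoints kubernetes
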
	
	
	==> describe nodes <==
	Name:               ha-791576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_41_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:41:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:55:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:47:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-791576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                2cbc5f56-f69a-4743-bfe0-c26cb688e6dd
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l5g8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 coredns-66bc5c9577-hw99j             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-w2245             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-791576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-m2l5j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-791576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-791576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-q5vfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-791576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-791576                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m53s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-791576 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           8m52s                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   Starting                 8m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m17s (x8 over 8m17s)  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m17s (x8 over 8m17s)  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m17s (x8 over 8m17s)  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           7m18s                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	
	
	Name:               ha-791576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:42:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:55:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:55:11 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:55:11 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:55:11 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:55:11 +0000   Tue, 02 Dec 2025 19:42:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-791576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dee40d7f-dceb-491c-be1b-bbfe6e5bbf5d
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-npkff                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-791576-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-ksng5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-791576-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-791576-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pjkt7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-791576-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-791576-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m34s                  kube-proxy       
	  Normal   Starting                 8m44s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Warning  CgroupV1                 9m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m26s (x8 over 9m27s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m26s (x8 over 9m27s)  kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m26s (x8 over 9m27s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m52s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   Starting                 8m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m13s (x8 over 8m13s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m13s (x8 over 8m13s)  kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m13s (x8 over 8m13s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           7m18s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	
	
	Name:               ha-791576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_44_30_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:44:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:46:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-791576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                368f8765-e8de-4d0d-9ce4-3a1b12660712
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8zbzj       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-4tffm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-791576-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m52s              node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           7m49s              node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           7m18s              node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeNotReady             6m59s              node-controller  Node ha-791576-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:41] overlayfs: idmapped layers are currently not supported
	[ +32.622792] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:44] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:45] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:46] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:47] overlayfs: idmapped layers are currently not supported
	[ +24.868591] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [392beb226748f9eb08b097b707e9c3fae2ea843b47c447e75c2c16d866e678de] <==
	{"level":"info","ts":"2025-12-02T19:49:06.780208Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.809456Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"44eee1400a9a95d4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-02T19:49:06.809499Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.877742Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.877902Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.314765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:35732","server-name":"","error":"read tcp 192.168.49.2:2379->192.168.49.4:35732: read: connection reset by peer"}
	{"level":"warn","ts":"2025-12-02T19:55:09.320863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:35734","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T19:55:09.405698Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4579929246608719274 12593026477526642892)"}
	{"level":"info","ts":"2025-12-02T19:55:09.407853Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"44eee1400a9a95d4","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-12-02T19:55:09.407978Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.408260Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.408319Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.408590Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.408680Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.408873Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.409036Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4","error":"context canceled"}
	{"level":"warn","ts":"2025-12-02T19:55:09.409096Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"44eee1400a9a95d4","error":"failed to read 44eee1400a9a95d4 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-12-02T19:55:09.409137Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.409283Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4","error":"http: read on closed response body"}
	{"level":"info","ts":"2025-12-02T19:55:09.409336Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.409372Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.409415Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.409476Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.436994Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.444288Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"44eee1400a9a95d4"}
	
	
	==> kernel <==
	 19:55:15 up  1:37,  0 user,  load average: 0.97, 1.28, 1.23
	Linux ha-791576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3e00e2da8bd7227823a5aa7d6e5e4ac4d0b3b6254164b8f98c55f9fe1e0a41f] <==
	I1202 19:54:42.312918       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:54:42.313067       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:54:42.313115       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:54:52.295209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:54:52.295374       1 main.go:301] handling current node
	I1202 19:54:52.295398       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:54:52.295406       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:54:52.295573       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1202 19:54:52.295595       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:54:52.295666       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:54:52.295678       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:55:02.294739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:55:02.294851       1 main.go:301] handling current node
	I1202 19:55:02.294904       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:55:02.294921       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:55:02.295068       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1202 19:55:02.295112       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:55:02.295205       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:55:02.295219       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:55:12.294908       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:55:12.294960       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:55:12.295111       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:55:12.295123       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:55:12.295184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:55:12.295243       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9] <==
	I1202 19:47:20.864636       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 19:47:20.866204       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 19:47:20.866305       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 19:47:20.885386       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 19:47:20.888287       1 aggregator.go:171] initial CRD sync complete...
	I1202 19:47:20.888313       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 19:47:20.888321       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 19:47:20.888328       1 cache.go:39] Caches are synced for autoregister controller
	I1202 19:47:20.889012       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 19:47:20.889287       1 cache.go:39] Caches are synced for LocalAvailability controller
	W1202 19:47:20.904276       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1202 19:47:20.906209       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 19:47:20.916565       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1202 19:47:20.920922       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1202 19:47:20.934648       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 19:47:20.959819       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 19:47:20.961820       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 19:47:20.967200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 19:47:20.968399       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 19:47:21.496374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 19:47:21.505479       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1202 19:47:22.134237       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1202 19:47:27.016568       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 19:47:27.032930       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 19:47:27.190342       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0ca58a409109c6cf93ecd9eb064e7f3091b3dd592f95be9877036c0d2bbfeb8d] <==
	I1202 19:47:26.656190       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576"
	I1202 19:47:26.656300       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576-m02"
	I1202 19:47:26.656402       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 19:47:26.660500       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 19:47:26.660768       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 19:47:26.663628       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 19:47:26.674684       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 19:47:26.651025       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 19:47:26.678311       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 19:47:26.697935       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 19:47:26.687406       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 19:47:26.700695       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:47:26.687552       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 19:47:26.687521       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 19:47:26.687960       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 19:47:26.687543       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 19:47:26.816935       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:47:26.835171       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:47:26.835200       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 19:47:26.835207       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 19:47:31.218533       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-791576-m04"
	I1202 19:53:17.454482       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:53:17.454338       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-xjn7v"
	E1202 19:55:10.010887       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-791576-m03\", UID:\"8a5dba8e-9b76-4e87-9053-ac95beaf6643\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noC
opy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-791576-m03\", UID:\"530f9ded-0cfe-4563-953d-e3f475e6bf0e\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-791576-m03\" not found" logger="UnhandledError"
	E1202 19:55:10.032494       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-791576-m03\", UID:\"5c6202c1-f485-4e0e-8c3a-f878b287a56b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mut
ex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-791576-m03\", UID:\"530f9ded-0cfe-4563-953d-e3f475e6bf0e\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-791576-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44] <==
	I1202 19:47:00.439112       1 serving.go:386] Generated self-signed cert in-memory
	I1202 19:47:01.716830       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 19:47:01.716879       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:01.723350       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 19:47:01.723493       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 19:47:01.723581       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 19:47:01.723592       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 19:47:21.737115       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [4f18d2c8cbb18519eff20cb6efdd106364f8f81f655e7d0e55cb89f551d5ed2f] <==
	I1202 19:47:22.149595       1 server_linux.go:53] "Using iptables proxy"
	I1202 19:47:22.267082       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:47:22.368012       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:47:22.368108       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 19:47:22.368213       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:47:22.406247       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 19:47:22.406301       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:47:22.411880       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:47:22.412231       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:47:22.412424       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:22.415564       1 config.go:200] "Starting service config controller"
	I1202 19:47:22.415619       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:47:22.415683       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:47:22.415727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:47:22.415771       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:47:22.415809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:47:22.419448       1 config.go:309] "Starting node config controller"
	I1202 19:47:22.419524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:47:22.419556       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 19:47:22.515835       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 19:47:22.515918       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:47:22.515839       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a038e721d900d1d05f302d84321aed3efa00807fa84f377dff1bb59ed20d56ce] <==
	I1202 19:47:20.728119       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 19:47:20.728161       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:20.745935       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:47:20.746041       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:47:20.747549       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 19:47:20.749738       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 19:47:20.809434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 19:47:20.809434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 19:47:20.809565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 19:47:20.809638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 19:47:20.809646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 19:47:20.809729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 19:47:20.809794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 19:47:20.809851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 19:47:20.809908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 19:47:20.809962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 19:47:20.810015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 19:47:20.810111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 19:47:20.810263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 19:47:20.810386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 19:47:20.813845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 19:47:20.813926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 19:47:20.813972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 19:47:20.814068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1202 19:47:20.946679       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 19:47:21 ha-791576 kubelet[793]: E1202 19:47:21.014395     793 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-791576\" already exists" pod="kube-system/kube-controller-manager-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.422624     793 apiserver.go:52] "Watching apiserver"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.426135     793 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-791576" podUID="1848798a-e3e5-49f2-a138-7a169024e0bd"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.440465     793 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.449304     793 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.449333     793 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494517     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-xtables-lock\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494710     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7a2e34ca-2f88-457c-8898-9cfbab53ca55-tmp\") pod \"storage-provisioner\" (UID: \"7a2e34ca-2f88-457c-8898-9cfbab53ca55\") " pod="kube-system/storage-provisioner"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494792     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-lib-modules\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494985     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/011527c2-0bbf-4dd9-a775-7bbd1a8647a4-xtables-lock\") pod \"kube-proxy-q5vfv\" (UID: \"011527c2-0bbf-4dd9-a775-7bbd1a8647a4\") " pod="kube-system/kube-proxy-q5vfv"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.495131     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/011527c2-0bbf-4dd9-a775-7bbd1a8647a4-lib-modules\") pod \"kube-proxy-q5vfv\" (UID: \"011527c2-0bbf-4dd9-a775-7bbd1a8647a4\") " pod="kube-system/kube-proxy-q5vfv"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.495164     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-cni-cfg\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.517177     793 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.605158     793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-791576" podStartSLOduration=0.605139116 podStartE2EDuration="605.139116ms" podCreationTimestamp="2025-12-02 19:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 19:47:21.587118107 +0000 UTC m=+23.288113583" watchObservedRunningTime="2025-12-02 19:47:21.605139116 +0000 UTC m=+23.306134592"
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.780422     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681 WatchSource:0}: Error finding container 611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681: Status 404 returned error can't find the container with id 611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.810847     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d WatchSource:0}: Error finding container b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d: Status 404 returned error can't find the container with id b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.882017     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e WatchSource:0}: Error finding container fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e: Status 404 returned error can't find the container with id fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e
	Dec 02 19:47:22 ha-791576 kubelet[793]: I1202 19:47:22.507633     793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1a35affb644c5a6d9375f3959ef470" path="/var/lib/kubelet/pods/de1a35affb644c5a6d9375f3959ef470/volumes"
	Dec 02 19:47:22 ha-791576 kubelet[793]: I1202 19:47:22.594773     793 scope.go:117] "RemoveContainer" containerID="0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44"
	Dec 02 19:47:52 ha-791576 kubelet[793]: I1202 19:47:52.698574     793 scope.go:117] "RemoveContainer" containerID="1ab649bc08ab060742673f50eeb7c2a57ee5a4578e1a59eddd554c3ad6d7404e"
	Dec 02 19:47:58 ha-791576 kubelet[793]: E1202 19:47:58.432933     793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237\": container with ID starting with 364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237 not found: ID does not exist" containerID="364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237"
	Dec 02 19:47:58 ha-791576 kubelet[793]: I1202 19:47:58.432988     793 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237" err="rpc error: code = NotFound desc = could not find container \"364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237\": container with ID starting with 364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237 not found: ID does not exist"
	Dec 02 19:47:58 ha-791576 kubelet[793]: E1202 19:47:58.433641     793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a\": container with ID starting with f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a not found: ID does not exist" containerID="f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a"
	Dec 02 19:47:58 ha-791576 kubelet[793]: I1202 19:47:58.433699     793 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a" err="rpc error: code = NotFound desc = could not find container \"f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a\": container with ID starting with f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a not found: ID does not exist"
	Dec 02 19:53:17 ha-791576 kubelet[793]: I1202 19:53:17.758545     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhfw\" (UniqueName: \"kubernetes.io/projected/9231dae8-fa3f-4719-aa0b-e2893cf7afe6-kube-api-access-gdhfw\") pod \"busybox-7b57f96db7-l5g8z\" (UID: \"9231dae8-fa3f-4719-aa0b-e2893cf7afe6\") " pod="default/busybox-7b57f96db7-l5g8z"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-791576 -n ha-791576
helpers_test.go:269: (dbg) Run:  kubectl --context ha-791576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-k9bh8
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-791576 describe pod busybox-7b57f96db7-k9bh8
helpers_test.go:290: (dbg) kubectl --context ha-791576 describe pod busybox-7b57f96db7-k9bh8:

-- stdout --
	Name:             busybox-7b57f96db7-k9bh8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fp5lt (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fp5lt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  119s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint(s), 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  119s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint(s), 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (9.03s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.06s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-791576" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-791576\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-791576\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-791576\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-791576
helpers_test.go:243: (dbg) docker inspect ha-791576:

-- stdout --
	[
	    {
	        "Id": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	        "Created": "2025-12-02T19:40:54.919017186Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 85549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:46:51.358682133Z",
	            "FinishedAt": "2025-12-02T19:46:50.744519975Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hostname",
	        "HostsPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hosts",
	        "LogPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94-json.log",
	        "Name": "/ha-791576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-791576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-791576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	                "LowerDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-791576",
	                "Source": "/var/lib/docker/volumes/ha-791576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-791576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-791576",
	                "name.minikube.sigs.k8s.io": "ha-791576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d42040ea74c4eeedb7f84e603f4c2848e2cd3d94b7edd53b3686d82839a44349",
	            "SandboxKey": "/var/run/docker/netns/d42040ea74c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-791576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:f0:35:b9:8a:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56dad1208e3b87b69e94173604d284ae0e7c0f0097a9b4d2483c8eb74a9ccc65",
	                    "EndpointID": "0de808d6cef38a4c373fb171d1e5a929c71554ad4cf487786793c13d6a707020",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-791576",
	                        "f426f8269bd9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-791576 -n ha-791576
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 logs -n 25: (1.308208592s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576-m04:/home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp testdata/cp-test.txt ha-791576-m04:/home/docker/cp-test.txt                                                             │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m04.txt │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m04_ha-791576.txt                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576.txt                                                 │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node start m02 --alsologtostderr -v 5                                                                                      │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:46 UTC │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │ 02 Dec 25 19:46 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5                                                                                   │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	│ node    │ ha-791576 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │ 02 Dec 25 19:55 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:46:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:46:51.075692   85424 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:46:51.075825   85424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:46:51.075836   85424 out.go:374] Setting ErrFile to fd 2...
	I1202 19:46:51.075841   85424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:46:51.076149   85424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:46:51.076551   85424 out.go:368] Setting JSON to false
	I1202 19:46:51.077367   85424 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5349,"bootTime":1764699462,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:46:51.077442   85424 start.go:143] virtualization:  
	I1202 19:46:51.082662   85424 out.go:179] * [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:46:51.085642   85424 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:46:51.085706   85424 notify.go:221] Checking for updates...
	I1202 19:46:51.091665   85424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:46:51.094539   85424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:51.097403   85424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:46:51.100336   85424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:46:51.103289   85424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:46:51.106849   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:51.106965   85424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:46:51.138890   85424 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:46:51.139003   85424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:46:51.198061   85424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:46:51.188947665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:46:51.198169   85424 docker.go:319] overlay module found
	I1202 19:46:51.201303   85424 out.go:179] * Using the docker driver based on existing profile
	I1202 19:46:51.204063   85424 start.go:309] selected driver: docker
	I1202 19:46:51.204087   85424 start.go:927] validating driver "docker" against &{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:51.204223   85424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:46:51.204328   85424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:46:51.266558   85424 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:46:51.256321599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:46:51.266979   85424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:46:51.267013   85424 cni.go:84] Creating CNI manager for ""
	I1202 19:46:51.267084   85424 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 19:46:51.267148   85424 start.go:353] cluster config:
	{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:51.272255   85424 out.go:179] * Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	I1202 19:46:51.275067   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:46:51.277961   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:46:51.280789   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:51.280839   85424 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 19:46:51.280871   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:46:51.280873   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:46:51.280964   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:46:51.280974   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:46:51.281126   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:51.300000   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:46:51.300023   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:46:51.300050   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:46:51.300081   85424 start.go:360] acquireMachinesLock for ha-791576: {Name:mkf0dd990410be65dae2a66c72db1ebd52e2b0ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:46:51.300153   85424 start.go:364] duration metric: took 46.004µs to acquireMachinesLock for "ha-791576"
	I1202 19:46:51.300175   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:46:51.300183   85424 fix.go:54] fixHost starting: 
	I1202 19:46:51.300454   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:46:51.316816   85424 fix.go:112] recreateIfNeeded on ha-791576: state=Stopped err=<nil>
	W1202 19:46:51.316845   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:46:51.320143   85424 out.go:252] * Restarting existing docker container for "ha-791576" ...
	I1202 19:46:51.320230   85424 cli_runner.go:164] Run: docker start ha-791576
	I1202 19:46:51.575902   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:46:51.594134   85424 kic.go:430] container "ha-791576" state is running.
	I1202 19:46:51.594514   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:51.619517   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:51.619754   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:46:51.619817   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:51.639059   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:51.639374   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:51.639778   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:46:51.641510   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35428->127.0.0.1:32813: read: connection reset by peer
	I1202 19:46:54.791183   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:46:54.791204   85424 ubuntu.go:182] provisioning hostname "ha-791576"
	I1202 19:46:54.791275   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:54.809134   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:54.809441   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:54.809458   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576 && echo "ha-791576" | sudo tee /etc/hostname
	I1202 19:46:54.966477   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:46:54.966565   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:54.984050   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:54.984375   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:54.984402   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:46:55.137902   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:46:55.137928   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:46:55.138006   85424 ubuntu.go:190] setting up certificates
	I1202 19:46:55.138016   85424 provision.go:84] configureAuth start
	I1202 19:46:55.138084   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:55.155651   85424 provision.go:143] copyHostCerts
	I1202 19:46:55.155701   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:46:55.155740   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:46:55.155758   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:46:55.155836   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:46:55.155925   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:46:55.155955   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:46:55.155965   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:46:55.155993   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:46:55.156051   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:46:55.156071   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:46:55.156082   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:46:55.156108   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:46:55.156162   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576 san=[127.0.0.1 192.168.49.2 ha-791576 localhost minikube]
	I1202 19:46:55.641637   85424 provision.go:177] copyRemoteCerts
	I1202 19:46:55.641717   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:46:55.641763   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:55.660498   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:55.765103   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:46:55.765169   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:46:55.782097   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:46:55.782154   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:46:55.798837   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:46:55.798898   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 19:46:55.816023   85424 provision.go:87] duration metric: took 677.979406ms to configureAuth
	I1202 19:46:55.816052   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:46:55.816326   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:55.816455   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:55.833499   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:55.833854   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1202 19:46:55.833876   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:46:56.249298   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:46:56.249319   85424 machine.go:97] duration metric: took 4.629549894s to provisionDockerMachine
	I1202 19:46:56.249331   85424 start.go:293] postStartSetup for "ha-791576" (driver="docker")
	I1202 19:46:56.249341   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:46:56.249400   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:46:56.249454   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.268549   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.373420   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:46:56.376533   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:46:56.376562   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:46:56.376586   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:46:56.376642   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:46:56.376760   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:46:56.376771   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:46:56.376874   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:46:56.383745   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:46:56.400262   85424 start.go:296] duration metric: took 150.916843ms for postStartSetup
	I1202 19:46:56.400381   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:46:56.400460   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.420055   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.522566   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:46:56.527172   85424 fix.go:56] duration metric: took 5.226983089s for fixHost
	I1202 19:46:56.527198   85424 start.go:83] releasing machines lock for "ha-791576", held for 5.227032622s
	I1202 19:46:56.527261   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:46:56.543387   85424 ssh_runner.go:195] Run: cat /version.json
	I1202 19:46:56.543430   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:46:56.543494   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.543434   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:46:56.561404   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.561708   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:46:56.749544   85424 ssh_runner.go:195] Run: systemctl --version
	I1202 19:46:56.755696   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:46:56.790499   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:46:56.794459   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:46:56.794568   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:46:56.801919   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:46:56.801941   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:46:56.801971   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:46:56.802028   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:46:56.816910   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:46:56.829587   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:46:56.829715   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:46:56.844766   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:46:56.857092   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:46:56.975356   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:46:57.091555   85424 docker.go:234] disabling docker service ...
	I1202 19:46:57.091665   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:46:57.106660   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:46:57.120539   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:46:57.239669   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:46:57.366517   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:46:57.382471   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:46:57.396694   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:46:57.396813   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.405941   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:46:57.406053   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.415370   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.424417   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.433387   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:46:57.442311   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.451228   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.459398   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:46:57.468002   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:46:57.475168   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:46:57.482408   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:46:57.597548   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:46:57.804313   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:46:57.804451   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:46:57.808320   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:46:57.808445   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:46:57.812025   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:46:57.839390   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:46:57.839543   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:46:57.867354   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:46:57.901354   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:46:57.904220   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:46:57.920051   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:46:57.923689   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:46:57.933012   85424 kubeadm.go:884] updating cluster {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:46:57.933164   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:57.933217   85424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:46:57.967565   85424 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:46:57.967590   85424 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:46:57.967641   85424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:46:57.994848   85424 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:46:57.994872   85424 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:46:57.994881   85424 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:46:57.994976   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:46:57.995055   85424 ssh_runner.go:195] Run: crio config
	I1202 19:46:58.061390   85424 cni.go:84] Creating CNI manager for ""
	I1202 19:46:58.061418   85424 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 19:46:58.061446   85424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:46:58.061470   85424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-791576 NodeName:ha-791576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:46:58.061604   85424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-791576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:46:58.061624   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:46:58.061690   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:46:58.074421   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:46:58.074559   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
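The kube-vip step above first probes for the ip_vs kernel modules with sudo sh -c "lsmod | grep ip_vs"; because grep exited with status 1 it gave up control-plane load balancing and only rendered the static-pod manifest that advertises the HA VIP 192.168.49.254 on eth0. A minimal Go sketch of that same probe, for illustration only (this is not minikube's kube-vip.go, and it needs the same privileges the log uses):

    // ipvsprobe.go - illustrative only: replicate the "lsmod | grep ip_vs" check
    // from the log above. grep exits non-zero when no ip_vs modules are loaded,
    // which is the condition under which load balancing is skipped.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").CombinedOutput()
        if err != nil {
            fmt.Println("ip_vs modules not available, skipping control-plane load balancing:", err)
            return
        }
        fmt.Printf("ip_vs modules loaded:\n%s", out)
    }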
	I1202 19:46:58.074648   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:46:58.083182   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:46:58.083291   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 19:46:58.091465   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 19:46:58.104313   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:46:58.118107   85424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1202 19:46:58.130768   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:46:58.143041   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:46:58.146530   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
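The one-liner above rewrites /etc/hosts on the node so that control-plane.minikube.internal resolves to the HA VIP 192.168.49.254: it filters out any existing entry for that name and appends the new mapping. A rough Go equivalent of that shell pipeline (an illustration, not the code minikube actually runs; paths and the VIP come from the log):

    // hostsentry.go - illustrative rewrite of /etc/hosts, mirroring the
    // grep -v / echo / cp pipeline in the log above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.49.254\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // drop any stale mapping for control-plane.minikube.internal
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }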
	I1202 19:46:58.155934   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:46:58.272546   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:46:58.287479   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.2
	I1202 19:46:58.287498   85424 certs.go:195] generating shared ca certs ...
	I1202 19:46:58.287513   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.287678   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:46:58.287718   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:46:58.287725   85424 certs.go:257] generating profile certs ...
	I1202 19:46:58.287810   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:46:58.287835   85424 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad
	I1202 19:46:58.287850   85424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1202 19:46:58.432480   85424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad ...
	I1202 19:46:58.432627   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad: {Name:mkc49591a089fa34cc904adb89cfa288cc2b970e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.432873   85424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad ...
	I1202 19:46:58.432910   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad: {Name:mk0be3cbf6db1780ac4ac275259d854f38f2158a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:58.433068   85424 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.243a7cad -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt
	I1202 19:46:58.433251   85424 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.243a7cad -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key
	I1202 19:46:58.433443   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:46:58.433477   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:46:58.433511   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:46:58.433556   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:46:58.433591   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:46:58.433624   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:46:58.433685   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:46:58.433721   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:46:58.433750   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:46:58.433833   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:46:58.433893   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:46:58.433920   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:46:58.433994   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:46:58.434052   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:46:58.434132   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:46:58.434225   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:46:58.434290   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.434337   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.434370   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.443939   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:46:58.463785   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:46:58.486458   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:46:58.508445   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:46:58.530317   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:46:58.548462   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:46:58.568358   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:46:58.586970   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:46:58.604714   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:46:58.627145   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:46:58.645042   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:46:58.663909   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:46:58.676006   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:46:58.681961   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:46:58.689749   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.693060   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.693152   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:46:58.735524   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:46:58.745065   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:46:58.754338   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.759068   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.759143   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:46:58.803928   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:46:58.811507   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:46:58.819506   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.823153   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.823249   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:46:58.865967   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:46:58.874198   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:46:58.878028   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:46:58.919236   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:46:58.961187   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:46:59.007842   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:46:59.061600   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:46:59.127987   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
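Each of the openssl x509 -noout -checkend 86400 runs above asks one question per certificate: does it remain valid for at least another 24 hours? A hedged Go equivalent using crypto/x509 (illustration only; pass any of the certificate paths from the log as the first argument):

    // checkend.go - illustrative equivalent of "openssl x509 -noout -checkend 86400":
    // exit 1 if the certificate given as the first argument expires within 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM data found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }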
	I1202 19:46:59.207795   85424 kubeadm.go:401] StartCluster: {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:46:59.207925   85424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:46:59.207988   85424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:46:59.265803   85424 cri.go:89] found id: "71e9ce78d64661ac6d00283cdb79e431fdb65c5c2f57fa8aaa18d21677420d38"
	I1202 19:46:59.265827   85424 cri.go:89] found id: "a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9"
	I1202 19:46:59.265833   85424 cri.go:89] found id: "0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44"
	I1202 19:46:59.265836   85424 cri.go:89] found id: "392beb226748f9eb08b097b707e9c3fae2ea843b47c447e75c2c16d866e678de"
	I1202 19:46:59.265840   85424 cri.go:89] found id: "a038e721d900d1d05f302d84321aed3efa00807fa84f377dff1bb59ed20d56ce"
	I1202 19:46:59.265843   85424 cri.go:89] found id: ""
	I1202 19:46:59.265890   85424 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 19:46:59.290356   85424 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:46:59Z" level=error msg="open /run/runc: no such file or directory"
	I1202 19:46:59.290428   85424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:46:59.301612   85424 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:46:59.301633   85424 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:46:59.301705   85424 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:46:59.310893   85424 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:46:59.311284   85424 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-791576" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:59.311384   85424 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "ha-791576" cluster setting kubeconfig missing "ha-791576" context setting]
	I1202 19:46:59.311696   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.312205   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:46:59.312709   85424 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:46:59.312741   85424 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:46:59.312748   85424 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:46:59.312753   85424 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:46:59.312758   85424 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:46:59.313075   85424 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:46:59.313166   85424 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:46:59.323603   85424 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:46:59.323629   85424 kubeadm.go:602] duration metric: took 21.981794ms to restartPrimaryControlPlane
	I1202 19:46:59.323638   85424 kubeadm.go:403] duration metric: took 115.854562ms to StartCluster
	I1202 19:46:59.323653   85424 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.323714   85424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:46:59.324315   85424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:46:59.324515   85424 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:46:59.324543   85424 start.go:242] waiting for startup goroutines ...
	I1202 19:46:59.324556   85424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:46:59.325058   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:59.330563   85424 out.go:179] * Enabled addons: 
	I1202 19:46:59.333607   85424 addons.go:530] duration metric: took 9.049214ms for enable addons: enabled=[]
	I1202 19:46:59.333674   85424 start.go:247] waiting for cluster config update ...
	I1202 19:46:59.333687   85424 start.go:256] writing updated cluster config ...
	I1202 19:46:59.337224   85424 out.go:203] 
	I1202 19:46:59.340497   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:46:59.340616   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.343973   85424 out.go:179] * Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	I1202 19:46:59.346800   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:46:59.349828   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:46:59.352721   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:46:59.352753   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:46:59.352862   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:46:59.352879   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:46:59.353002   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.353206   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:46:59.379004   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:46:59.379030   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:46:59.379043   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:46:59.379066   85424 start.go:360] acquireMachinesLock for ha-791576-m02: {Name:mka8905b718126ef1af03281337b5ea61f190248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:46:59.379121   85424 start.go:364] duration metric: took 35.265µs to acquireMachinesLock for "ha-791576-m02"
	I1202 19:46:59.379145   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:46:59.379150   85424 fix.go:54] fixHost starting: m02
	I1202 19:46:59.379415   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:46:59.419284   85424 fix.go:112] recreateIfNeeded on ha-791576-m02: state=Stopped err=<nil>
	W1202 19:46:59.419317   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:46:59.422504   85424 out.go:252] * Restarting existing docker container for "ha-791576-m02" ...
	I1202 19:46:59.422616   85424 cli_runner.go:164] Run: docker start ha-791576-m02
	I1202 19:46:59.837868   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:46:59.874389   85424 kic.go:430] container "ha-791576-m02" state is running.
	I1202 19:46:59.874756   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:46:59.901234   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:46:59.901470   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:46:59.901529   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:46:59.939434   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:46:59.939741   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:46:59.939756   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:46:59.941956   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:47:03.181981   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:47:03.182010   85424 ubuntu.go:182] provisioning hostname "ha-791576-m02"
	I1202 19:47:03.182083   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.211290   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:03.211596   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:03.211614   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m02 && echo "ha-791576-m02" | sudo tee /etc/hostname
	I1202 19:47:03.424005   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:47:03.424078   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.477630   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:03.477958   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:03.477977   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:47:03.677990   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:47:03.678027   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:47:03.678048   85424 ubuntu.go:190] setting up certificates
	I1202 19:47:03.678060   85424 provision.go:84] configureAuth start
	I1202 19:47:03.678128   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:47:03.701231   85424 provision.go:143] copyHostCerts
	I1202 19:47:03.701274   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:03.701304   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:47:03.701318   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:03.701396   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:47:03.701478   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:03.701500   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:47:03.701510   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:03.701537   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:47:03.701637   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:03.701668   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:47:03.701674   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:03.701705   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:47:03.701761   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m02 san=[127.0.0.1 192.168.49.3 ha-791576-m02 localhost minikube]
	I1202 19:47:03.945165   85424 provision.go:177] copyRemoteCerts
	I1202 19:47:03.945235   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:47:03.945280   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:03.975366   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.102132   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:47:04.102208   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:47:04.134543   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:47:04.134604   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:47:04.161226   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:47:04.161297   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:47:04.192644   85424 provision.go:87] duration metric: took 514.571013ms to configureAuth
	I1202 19:47:04.192676   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:47:04.192912   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:04.193014   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.219315   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:04.219619   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1202 19:47:04.219638   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:47:04.675291   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:47:04.675356   85424 machine.go:97] duration metric: took 4.773873492s to provisionDockerMachine
	I1202 19:47:04.675373   85424 start.go:293] postStartSetup for "ha-791576-m02" (driver="docker")
	I1202 19:47:04.675386   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:47:04.675452   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:47:04.675498   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.694108   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.797554   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:47:04.800903   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:47:04.800934   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:47:04.800945   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:47:04.801002   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:47:04.801077   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:47:04.801089   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:47:04.801185   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:47:04.808567   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:04.826419   85424 start.go:296] duration metric: took 151.029848ms for postStartSetup
	I1202 19:47:04.826519   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:47:04.826573   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.843360   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:04.943115   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
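The two df invocations above are the post-start disk check on /var (percentage used, then free space in GB). A comparable check can be done without shelling out, via statfs; the sketch below is an illustration only (Linux-specific, not the code minikube runs):

    // diskfree.go - illustrative, Linux-only: report free space on /var in GiB,
    // similar in spirit to the df -BG check in the log above.
    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var", &st); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        freeGiB := st.Bavail * uint64(st.Bsize) / (1 << 30)
        fmt.Printf("/var free: %dG\n", freeGiB)
    }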
	I1202 19:47:04.948188   85424 fix.go:56] duration metric: took 5.569031295s for fixHost
	I1202 19:47:04.948214   85424 start.go:83] releasing machines lock for "ha-791576-m02", held for 5.56907917s
	I1202 19:47:04.948279   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:47:04.970572   85424 out.go:179] * Found network options:
	I1202 19:47:04.973538   85424 out.go:179]   - NO_PROXY=192.168.49.2
	W1202 19:47:04.976397   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:04.976445   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:47:04.976513   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:47:04.976562   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.976885   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:47:04.976937   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:47:04.998993   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:05.000433   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:47:05.146894   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:47:05.207886   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:47:05.207960   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:47:05.215827   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:47:05.215855   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:47:05.215923   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:47:05.215992   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:47:05.231545   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:47:05.245040   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:47:05.245102   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:47:05.260499   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:47:05.273511   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:47:05.399821   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:47:05.547719   85424 docker.go:234] disabling docker service ...
	I1202 19:47:05.547833   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:47:05.574826   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:47:05.600862   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:47:05.835995   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:47:06.044894   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:47:06.061431   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:47:06.092815   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:47:06.092932   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.102629   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:47:06.102737   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.112408   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.122046   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.131510   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:47:06.140127   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.149293   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.162481   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:06.173417   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:47:06.181633   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:47:06.189368   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:06.407349   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
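The sed calls above patch /etc/crio/crio.conf.d/02-crio.conf in place: they pin pause_image to registry.k8s.io/pause:3.10.1, force cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and open net.ipv4.ip_unprivileged_port_start=0 via default_sysctls, after which crio is restarted. As a sketch of how one such substitution could be done in Go (illustration only, not minikube's crio.go):

    // pauseimage.go - illustrative: replace the pause_image line in a CRI-O drop-in,
    // mirroring the first sed command in the log above.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        updated := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, updated, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("pause_image updated; restart crio to apply")
    }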
	I1202 19:47:06.656582   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:47:06.656693   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:47:06.660537   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:47:06.660607   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:47:06.664156   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:47:06.693772   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:47:06.693853   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:47:06.722024   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:47:06.754035   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:47:06.757007   85424 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:47:06.759990   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:47:06.777500   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:47:06.781343   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:47:06.791187   85424 mustload.go:66] Loading cluster: ha-791576
	I1202 19:47:06.791444   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:06.791707   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:47:06.808279   85424 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:47:06.808561   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.3
	I1202 19:47:06.808576   85424 certs.go:195] generating shared ca certs ...
	I1202 19:47:06.808596   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:47:06.808787   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:47:06.808843   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:47:06.808854   85424 certs.go:257] generating profile certs ...
	I1202 19:47:06.808932   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:47:06.808997   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.7b209479
	I1202 19:47:06.809041   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:47:06.809055   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:47:06.809070   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:47:06.809087   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:47:06.809100   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:47:06.809110   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:47:06.809124   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:47:06.809139   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:47:06.809152   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:47:06.809203   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:47:06.809238   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:47:06.809249   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:47:06.809275   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:47:06.809305   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:47:06.809331   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:47:06.809375   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:06.809409   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:06.809426   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:47:06.809437   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:47:06.809494   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:47:06.826818   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:47:06.926038   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:47:06.930094   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:47:06.938514   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:47:06.942246   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:47:06.951163   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:47:06.954843   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:47:06.962999   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:47:06.966675   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:47:06.975178   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:47:06.978885   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:47:06.987509   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:47:06.990939   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:47:06.999005   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:47:07.017141   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:47:07.034232   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:47:07.052223   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:47:07.068874   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:47:07.085118   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:47:07.102568   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:47:07.119624   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:47:07.137149   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:47:07.155661   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:47:07.174795   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:47:07.191770   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:47:07.204561   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:47:07.217443   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:47:07.230339   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:47:07.242695   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:47:07.255417   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:47:07.267762   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:47:07.280304   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:47:07.286551   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:47:07.294800   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.298454   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.298514   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:47:07.338926   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:47:07.346584   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:47:07.354270   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.358006   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.358069   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:47:07.398667   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:47:07.406676   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:47:07.414843   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.419161   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.419247   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:47:07.460207   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:47:07.467798   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:47:07.471321   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:47:07.514285   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:47:07.561278   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:47:07.603224   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:47:07.644697   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:47:07.686079   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
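The expiry checks above rely on openssl's -checkend flag: the command exits 0 when the certificate is still valid for at least the given window (86400 seconds, i.e. 24 hours) and 1 when it would expire inside it, which is what lets minikube decide whether a cert needs regenerating. A minimal way to reproduce one of these checks by hand on the node (illustrative only; the path is one of those listed above):

    # exit status 0 = still valid for >24h, 1 = expires within 24h and would be regenerated
    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expires within 24h"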
	I1202 19:47:07.727346   85424 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1202 19:47:07.727470   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:47:07.727522   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:47:07.727601   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:47:07.740480   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:47:07.740546   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
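The manifest above is the static pod that advertises the HA virtual IP 192.168.49.254 on eth0 over ARP, with leader election (vip_leaderelection) deciding which control-plane node holds the address; because `lsmod | grep ip_vs` returned nothing, the IPVS-based control-plane load-balancing step was skipped and only the VIP failover path is configured. A quick way to check whether the modules could be made available on such a node (a sketch; only ip_vs itself appears in the log, the extra scheduler module names are assumed):

    # -a loads several modules in one call; if this succeeds, lsmod no longer comes back empty
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs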
	I1202 19:47:07.740622   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:47:07.748776   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:47:07.748850   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:47:07.756859   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:47:07.770007   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:47:07.782397   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:47:07.795978   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:47:07.799804   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:47:07.808809   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:07.936978   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:47:07.950174   85424 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:47:07.950576   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:07.954257   85424 out.go:179] * Verifying Kubernetes components...
	I1202 19:47:07.957286   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:08.088938   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:47:08.104389   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:47:08.104523   85424 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:47:08.104787   85424 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m02" to be "Ready" ...
	W1202 19:47:18.106667   85424 node_ready.go:55] error getting node "ha-791576-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-791576-m02": net/http: TLS handshake timeout
	I1202 19:47:20.815620   85424 node_ready.go:49] node "ha-791576-m02" is "Ready"
	I1202 19:47:20.815646   85424 node_ready.go:38] duration metric: took 12.710819831s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:47:20.815659   85424 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:47:20.815715   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:21.316644   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:21.816110   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:22.315948   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:22.815840   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.316118   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.815903   85424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:47:23.838753   85424 api_server.go:72] duration metric: took 15.888533132s to wait for apiserver process to appear ...
	I1202 19:47:23.838776   85424 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:47:23.838807   85424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:47:23.866609   85424 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:47:23.870765   85424 api_server.go:141] control plane version: v1.34.2
	I1202 19:47:23.870793   85424 api_server.go:131] duration metric: took 32.004959ms to wait for apiserver health ...
	I1202 19:47:23.870804   85424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:47:23.889009   85424 system_pods.go:59] 26 kube-system pods found
	I1202 19:47:23.889120   85424 system_pods.go:61] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.889176   85424 system_pods.go:61] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.889202   85424 system_pods.go:61] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:47:23.889222   85424 system_pods.go:61] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:47:23.889255   85424 system_pods.go:61] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:47:23.889279   85424 system_pods.go:61] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:47:23.889300   85424 system_pods.go:61] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:47:23.889339   85424 system_pods.go:61] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:47:23.889361   85424 system_pods.go:61] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:47:23.889396   85424 system_pods.go:61] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.889439   85424 system_pods.go:61] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.889463   85424 system_pods.go:61] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:47:23.889517   85424 system_pods.go:61] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.889553   85424 system_pods.go:61] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.889589   85424 system_pods.go:61] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:47:23.889612   85424 system_pods.go:61] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:47:23.889629   85424 system_pods.go:61] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:47:23.889649   85424 system_pods.go:61] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:47:23.889703   85424 system_pods.go:61] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:47:23.889730   85424 system_pods.go:61] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:47:23.889767   85424 system_pods.go:61] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:47:23.889789   85424 system_pods.go:61] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:47:23.889813   85424 system_pods.go:61] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:47:23.889853   85424 system_pods.go:61] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:47:23.889881   85424 system_pods.go:61] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:47:23.889945   85424 system_pods.go:61] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:47:23.889982   85424 system_pods.go:74] duration metric: took 19.17073ms to wait for pod list to return data ...
	I1202 19:47:23.890015   85424 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:47:23.903242   85424 default_sa.go:45] found service account: "default"
	I1202 19:47:23.903345   85424 default_sa.go:55] duration metric: took 13.295846ms for default service account to be created ...
	I1202 19:47:23.903390   85424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:47:23.918952   85424 system_pods.go:86] 26 kube-system pods found
	I1202 19:47:23.919047   85424 system_pods.go:89] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.919079   85424 system_pods.go:89] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:47:23.919121   85424 system_pods.go:89] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:47:23.919147   85424 system_pods.go:89] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:47:23.919165   85424 system_pods.go:89] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:47:23.919210   85424 system_pods.go:89] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:47:23.919234   85424 system_pods.go:89] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:47:23.919257   85424 system_pods.go:89] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:47:23.919293   85424 system_pods.go:89] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:47:23.919328   85424 system_pods.go:89] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.919349   85424 system_pods.go:89] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:47:23.919407   85424 system_pods.go:89] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:47:23.919452   85424 system_pods.go:89] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.919498   85424 system_pods.go:89] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:47:23.919527   85424 system_pods.go:89] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:47:23.919571   85424 system_pods.go:89] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:47:23.919594   85424 system_pods.go:89] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:47:23.919611   85424 system_pods.go:89] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:47:23.919658   85424 system_pods.go:89] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:47:23.919681   85424 system_pods.go:89] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:47:23.919700   85424 system_pods.go:89] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:47:23.919737   85424 system_pods.go:89] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:47:23.919770   85424 system_pods.go:89] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:47:23.919789   85424 system_pods.go:89] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:47:23.919824   85424 system_pods.go:89] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:47:23.919853   85424 system_pods.go:89] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:47:23.919880   85424 system_pods.go:126] duration metric: took 16.439891ms to wait for k8s-apps to be running ...
	I1202 19:47:23.919920   85424 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:47:23.920039   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:47:23.943430   85424 system_svc.go:56] duration metric: took 23.498391ms WaitForService to wait for kubelet
	I1202 19:47:23.943548   85424 kubeadm.go:587] duration metric: took 15.993331779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:47:23.943620   85424 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:47:23.963377   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963414   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963434   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963440   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963444   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963448   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963453   85424 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:47:23.963456   85424 node_conditions.go:123] node cpu capacity is 2
	I1202 19:47:23.963461   85424 node_conditions.go:105] duration metric: took 19.808046ms to run NodePressure ...
	I1202 19:47:23.963474   85424 start.go:242] waiting for startup goroutines ...
	I1202 19:47:23.963497   85424 start.go:256] writing updated cluster config ...
	I1202 19:47:23.966956   85424 out.go:203] 
	I1202 19:47:23.970081   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:23.970200   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:23.973545   85424 out.go:179] * Starting "ha-791576-m03" control-plane node in "ha-791576" cluster
	I1202 19:47:23.977222   85424 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:47:23.980067   85424 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:47:23.982893   85424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:47:23.982917   85424 cache.go:65] Caching tarball of preloaded images
	I1202 19:47:23.982945   85424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:47:23.983271   85424 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:47:23.983306   85424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:47:23.983500   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:24.032012   85424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:47:24.032039   85424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:47:24.032056   85424 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:47:24.032084   85424 start.go:360] acquireMachinesLock for ha-791576-m03: {Name:mke11e8197b1eb1f85f8abb689432afa86afcde6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:47:24.032155   85424 start.go:364] duration metric: took 54.948µs to acquireMachinesLock for "ha-791576-m03"
	I1202 19:47:24.032184   85424 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:47:24.032191   85424 fix.go:54] fixHost starting: m03
	I1202 19:47:24.032519   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:47:24.061731   85424 fix.go:112] recreateIfNeeded on ha-791576-m03: state=Stopped err=<nil>
	W1202 19:47:24.061757   85424 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:47:24.064925   85424 out.go:252] * Restarting existing docker container for "ha-791576-m03" ...
	I1202 19:47:24.065009   85424 cli_runner.go:164] Run: docker start ha-791576-m03
	I1202 19:47:24.481554   85424 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:47:24.511641   85424 kic.go:430] container "ha-791576-m03" state is running.
	I1202 19:47:24.512003   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:24.552004   85424 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:47:24.552243   85424 machine.go:94] provisionDockerMachine start ...
	I1202 19:47:24.552303   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:24.583210   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:24.583581   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:24.583591   85424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:47:24.584229   85424 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47380->127.0.0.1:32823: read: connection reset by peer
	I1202 19:47:27.831905   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m03
	
	I1202 19:47:27.832023   85424 ubuntu.go:182] provisioning hostname "ha-791576-m03"
	I1202 19:47:27.832106   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:27.866228   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:27.866528   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:27.866538   85424 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m03 && echo "ha-791576-m03" | sudo tee /etc/hostname
	I1202 19:47:28.206271   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m03
	
	I1202 19:47:28.206429   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:28.235744   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:28.236058   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:28.236081   85424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:47:28.537696   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:47:28.537727   85424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:47:28.537745   85424 ubuntu.go:190] setting up certificates
	I1202 19:47:28.537786   85424 provision.go:84] configureAuth start
	I1202 19:47:28.537865   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:28.575346   85424 provision.go:143] copyHostCerts
	I1202 19:47:28.575393   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:28.575433   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:47:28.575445   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:47:28.575528   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:47:28.575619   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:28.575644   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:47:28.575649   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:47:28.575682   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:47:28.575735   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:28.575759   85424 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:47:28.575763   85424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:47:28.575791   85424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:47:28.575848   85424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m03 san=[127.0.0.1 192.168.49.4 ha-791576-m03 localhost minikube]
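The server certificate generated here is the docker-machine provisioning cert, signed with the ca.pem/ca-key.pem pair copied just above and valid for every name in the san=[...] list, so the node can be reached as 127.0.0.1, its cluster IP, its hostname, localhost or minikube. A quick way to confirm which SANs ended up in the copied cert (illustrative; /etc/docker/server.pem is the ServerCertRemotePath shown earlier in the log):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'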
	I1202 19:47:28.737231   85424 provision.go:177] copyRemoteCerts
	I1202 19:47:28.737301   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:47:28.737343   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:28.767082   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:28.894686   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:47:28.894758   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:47:28.937222   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:47:28.937295   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:47:29.025224   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:47:29.025298   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:47:29.085079   85424 provision.go:87] duration metric: took 547.273818ms to configureAuth
	I1202 19:47:29.085116   85424 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:47:29.085371   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:47:29.085483   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:29.111990   85424 main.go:143] libmachine: Using SSH client type: native
	I1202 19:47:29.112296   85424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1202 19:47:29.112318   85424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:47:29.803395   85424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:47:29.803431   85424 machine.go:97] duration metric: took 5.251179236s to provisionDockerMachine
	I1202 19:47:29.803442   85424 start.go:293] postStartSetup for "ha-791576-m03" (driver="docker")
	I1202 19:47:29.803453   85424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:47:29.803521   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:47:29.803574   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:29.833575   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:29.954416   85424 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:47:29.960020   85424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:47:29.960062   85424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:47:29.960082   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:47:29.960151   85424 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:47:29.960229   85424 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:47:29.960240   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:47:29.960341   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:47:29.982991   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:47:30.035283   85424 start.go:296] duration metric: took 231.823498ms for postStartSetup
	I1202 19:47:30.035374   85424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:47:30.035419   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.070768   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.190107   85424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:47:30.196635   85424 fix.go:56] duration metric: took 6.164437606s for fixHost
	I1202 19:47:30.196666   85424 start.go:83] releasing machines lock for "ha-791576-m03", held for 6.164502097s
	I1202 19:47:30.196744   85424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:47:30.237763   85424 out.go:179] * Found network options:
	I1202 19:47:30.240640   85424 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 19:47:30.243436   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243469   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243493   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:47:30.243503   85424 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:47:30.243571   85424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:47:30.243615   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.243653   85424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:47:30.243712   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:47:30.273326   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.286780   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:47:30.653045   85424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:47:30.787771   85424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:47:30.787854   85424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:47:30.833087   85424 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:47:30.833158   85424 start.go:496] detecting cgroup driver to use...
	I1202 19:47:30.833206   85424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:47:30.833279   85424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:47:30.864249   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:47:30.889806   85424 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:47:30.889863   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:47:30.917840   85424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:47:30.984243   85424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:47:31.253878   85424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:47:31.593901   85424 docker.go:234] disabling docker service ...
	I1202 19:47:31.594010   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:47:31.621301   85424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:47:31.661349   85424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:47:32.003626   85424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:47:32.391869   85424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:47:32.435757   85424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:47:32.493110   85424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:47:32.493217   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.524849   85424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:47:32.524962   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.565517   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.598569   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.641426   85424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:47:32.662712   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.677733   85424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.714192   85424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:47:32.736481   85424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:47:32.750823   85424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
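Taken together, the sed edits above pin the CRI-O pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup and re-add the unprivileged-port sysctl, while the last two commands read the bridge-nf-call-iptables sysctl and turn on IP forwarding. A quick way to confirm what those edits actually left in 02-crio.conf before the restart (illustrative; the expected values are inferred from the sed expressions above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, based on the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",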
	I1202 19:47:32.766296   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:47:33.098331   85424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:49:03.522289   85424 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.42388116s)
	I1202 19:49:03.522317   85424 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:49:03.522385   85424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:49:03.526524   85424 start.go:564] Will wait 60s for crictl version
	I1202 19:49:03.526585   85424 ssh_runner.go:195] Run: which crictl
	I1202 19:49:03.530326   85424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:49:03.571925   85424 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:49:03.572010   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:49:03.609479   85424 ssh_runner.go:195] Run: crio --version
	I1202 19:49:03.650610   85424 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:49:03.653540   85424 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:49:03.656557   85424 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 19:49:03.659527   85424 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:49:03.677810   85424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:49:03.681792   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:49:03.692859   85424 mustload.go:66] Loading cluster: ha-791576
	I1202 19:49:03.693117   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:49:03.693363   85424 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:49:03.709753   85424 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:49:03.710031   85424 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.4
	I1202 19:49:03.710040   85424 certs.go:195] generating shared ca certs ...
	I1202 19:49:03.710054   85424 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:49:03.710179   85424 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:49:03.710223   85424 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:49:03.710229   85424 certs.go:257] generating profile certs ...
	I1202 19:49:03.710306   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:49:03.710371   85424 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.7aeb3685
	I1202 19:49:03.710427   85424 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:49:03.710436   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:49:03.710521   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:49:03.710542   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:49:03.710554   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:49:03.710565   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:49:03.710577   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:49:03.710598   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:49:03.710610   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:49:03.710662   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:49:03.710695   85424 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:49:03.710703   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:49:03.710730   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:49:03.710755   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:49:03.710778   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:49:03.710822   85424 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:49:03.711042   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:49:03.711071   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:49:03.711083   85424 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:03.711181   85424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:49:03.728781   85424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:49:03.830007   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:49:03.833942   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:49:03.842299   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:49:03.846144   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:49:03.854532   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:49:03.857855   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:49:03.866234   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:49:03.870642   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:49:03.879137   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:49:03.883549   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:49:03.893143   85424 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:49:03.896763   85424 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:49:03.904772   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:49:03.925546   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:49:03.951452   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:49:03.975797   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:49:03.998666   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 19:49:04.023000   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:49:04.042956   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:49:04.061815   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:49:04.081799   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:49:04.113304   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:49:04.131292   85424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:49:04.149359   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:49:04.163556   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:49:04.177001   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:49:04.191331   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:49:04.204195   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:49:04.216872   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:49:04.229341   85424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:49:04.242596   85424 ssh_runner.go:195] Run: openssl version
	I1202 19:49:04.248724   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:49:04.256868   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.260467   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.260531   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:49:04.301235   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:49:04.308894   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:49:04.317175   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.320635   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.320703   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:49:04.362642   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:49:04.371073   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:49:04.379233   85424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.383803   85424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.383867   85424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:49:04.425589   85424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:49:04.433230   85424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:49:04.436905   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:49:04.478804   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:49:04.521202   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:49:04.562989   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:49:04.603885   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:49:04.644970   85424 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
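
The six openssl runs above are pre-flight expiry checks: "openssl x509 -checkend 86400" exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which would force the certificate to be regenerated before the node joins. A minimal Go sketch of the same check (illustrative only, not minikube's certs.go; the path is taken from the log above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // certExpiresWithin reports whether the PEM certificate at path expires within
    // the given window, i.e. the same question answered by
    // "openssl x509 -noout -in <path> -checkend <seconds>" in the log above.
    func certExpiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Path from the log above; 86400 seconds == 24 hours.
        expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }
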
	I1202 19:49:04.686001   85424 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1202 19:49:04.686142   85424 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:49:04.686175   85424 kube-vip.go:115] generating kube-vip config ...
	I1202 19:49:04.686225   85424 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:49:04.698332   85424 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:49:04.698392   85424 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
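
The generated manifest is a static pod for each control-plane node: kube-vip advertises the APIServerHAVIP 192.168.49.254 on eth0 via ARP and elects a single advertiser through the plndr-cp-lock Lease in kube-system; because the ip_vs lsmod check above failed, control-plane load balancing is skipped and only VIP failover is configured. A minimal sketch (not minikube code) that checks whether the VIP answers on the API server port from the manifest:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // VIP and port taken from the manifest's address/port env vars above.
        addr := net.JoinHostPort("192.168.49.254", "8443")
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("VIP answers on", addr)
    }
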
	I1202 19:49:04.698462   85424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:49:04.706596   85424 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:49:04.706697   85424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:49:04.714019   85424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:49:04.726439   85424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:49:04.740943   85424 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:49:04.755477   85424 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:49:04.759442   85424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
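
The one-liner above keeps the hosts entry idempotent: it strips any existing control-plane.minikube.internal line and re-appends one pointing at the HA VIP. A rough Go equivalent, assuming root and the same file layout (illustrative only):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const (
        hostsPath = "/etc/hosts"
        hostName  = "control-plane.minikube.internal"
        vip       = "192.168.49.254"
    )

    func main() {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Drop any existing entry for the name, then re-append it pointing at the VIP.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+hostName) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, vip+"\t"+hostName)
        // Writing /etc/hosts needs root, just like the sudo cp in the log.
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
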
	I1202 19:49:04.769254   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:49:04.889322   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:49:04.903380   85424 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:49:04.903723   85424 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:49:04.907146   85424 out.go:179] * Verifying Kubernetes components...
	I1202 19:49:04.910053   85424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:49:05.053002   85424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:49:05.069583   85424 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:49:05.069742   85424 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:49:05.070007   85424 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m03" to be "Ready" ...
	W1202 19:49:07.074081   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:09.574441   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:12.073995   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:14.075158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:16.574109   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:19.074269   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:21.573633   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:24.075532   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:26.573178   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:28.573751   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:30.574196   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:33.074433   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:35.574293   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:38.074355   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:40.572995   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:42.573766   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:44.574193   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:47.074875   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:49.574182   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:52.073848   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:54.074871   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:56.574461   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:49:59.074135   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:01.075025   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:03.573959   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:05.574229   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:08.073434   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:10.075308   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:12.573891   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:14.574258   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:17.075768   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:19.574491   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:22.073796   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:24.074628   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:26.574014   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:29.073484   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:31.074366   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:33.077573   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:35.574409   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:38.074415   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:40.076462   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:42.573398   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:44.574236   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:47.074487   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:49.574052   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:51.574295   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:53.574395   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:56.074579   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:50:58.573990   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:00.574496   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:03.074093   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:05.573622   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:07.574521   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:10.074177   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:12.074658   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:14.574234   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:17.073779   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:19.074824   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:21.075177   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:23.574226   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:25.574533   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:28.074719   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:30.573516   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:32.574725   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:35.073690   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:37.073844   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:39.074254   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:41.074445   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:43.574427   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:46.074495   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:48.075157   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:50.574559   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:53.074039   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:55.075518   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:51:57.574296   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:00.125095   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:02.573158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:04.574068   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:07.074149   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:09.573261   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:11.574325   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:14.074158   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:16.074259   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:18.578414   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:21.074856   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:23.573367   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:25.574018   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:28.073545   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:30.074750   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:32.074791   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:34.573792   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:37.073884   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:39.074273   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:41.074719   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:43.573239   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:45.574142   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:48.073730   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:50.074154   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:52.074293   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:54.074677   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:56.574118   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:52:58.575322   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:01.074442   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:03.574221   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:06.074487   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:08.573768   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:10.574179   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:13.073867   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:15.074575   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:17.581482   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:20.075478   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:22.574434   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:25.079089   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:27.574074   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:30.074259   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:32.573125   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:34.573275   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:36.573791   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:39.075423   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:41.573386   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:43.573426   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:46.074050   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:48.074229   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:50.574069   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:53.073917   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:55.573030   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:53:57.574590   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:00.099899   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:02.573639   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:04.573928   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:06.574012   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:08.574318   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:11.073394   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:13.074011   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:15.074319   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:17.573595   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:19.574170   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:22.074150   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:24.074500   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:26.573647   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:28.573867   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:30.574160   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:33.074365   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:35.074585   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:37.574466   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:40.075645   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:42.573981   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:44.574615   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:46.576226   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:49.074146   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:51.074479   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:53.574396   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:56.073822   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:54:58.074332   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:00.115264   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:02.573371   85424 node_ready.go:57] node "ha-791576-m03" has "Ready":"Unknown" status (will retry)
	W1202 19:55:05.070625   85424 node_ready.go:55] error getting node "ha-791576-m03" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1202 19:55:05.070669   85424 node_ready.go:38] duration metric: took 6m0.000641476s for node "ha-791576-m03" to be "Ready" ...
	I1202 19:55:05.073996   85424 out.go:203] 
	W1202 19:55:05.077043   85424 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 19:55:05.077067   85424 out.go:285] * 
	W1202 19:55:05.079288   85424 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 19:55:05.082165   85424 out.go:203] 
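
From 19:49:05 onward the run polls ha-791576-m03's Ready condition roughly every 2.5 seconds; the status stays Unknown (the value the node controller sets once a kubelet stops posting status), so after the 6-minute budget the client-go rate limiter rejects the next request because it would exceed the context deadline, and minikube exits with GUEST_START. A minimal sketch of the same wait pattern with client-go (an assumed helper, not minikube's node_ready.go; the kubeconfig handling and node name are illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until it is True or the
    // timeout elapses, roughly the pattern behind the node_ready.go lines above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as "not ready yet" and retry
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitNodeReady(context.Background(), cs, "ha-791576-m03", 6*time.Minute); err != nil {
            fmt.Println("node never became Ready:", err)
        }
    }
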
	
	
	==> CRI-O <==
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.307348407Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.31130981Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:48:02 ha-791576 crio[653]: time="2025-12-02T19:48:02.311346765Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.171067478Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-l5g8z/POD" id=3186c4cc-fc42-4e21-9951-8f685af60ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.171146976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.176647774Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-l5g8z Namespace:default ID:e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 UID:9231dae8-fa3f-4719-aa0b-e2893cf7afe6 NetNS:/var/run/netns/7fb14d7a-6c4f-4e81-940a-0b966199ab09 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000785c8}] Aliases:map[]}"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.176731103Z" level=info msg="Adding pod default_busybox-7b57f96db7-l5g8z to CNI network \"kindnet\" (type=ptp)"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.190912432Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-l5g8z Namespace:default ID:e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 UID:9231dae8-fa3f-4719-aa0b-e2893cf7afe6 NetNS:/var/run/netns/7fb14d7a-6c4f-4e81-940a-0b966199ab09 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000785c8}] Aliases:map[]}"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.191265818Z" level=info msg="Checking pod default_busybox-7b57f96db7-l5g8z for CNI network kindnet (type=ptp)"
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.194319023Z" level=info msg="Ran pod sandbox e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4 with infra container: default/busybox-7b57f96db7-l5g8z/POD" id=3186c4cc-fc42-4e21-9951-8f685af60ef9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195591573Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195818661Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.195924693Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28 found" id=145d4fe7-c180-46ce-b213-44b2423823a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.198155298Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=6aac8e64-e8fe-4d24-8b7c-6bfee82ead34 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:53:18 ha-791576 crio[653]: time="2025-12-02T19:53:18.21349448Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.105132802Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=6aac8e64-e8fe-4d24-8b7c-6bfee82ead34 name=/runtime.v1.ImageService/PullImage
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.109539064Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2fa56a44-9db8-4756-acdc-664a3a83dc98 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.115065486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2fa5b526-dbea-407e-8ac9-a7b0f9d1c48f name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.123684127Z" level=info msg="Creating container: default/busybox-7b57f96db7-l5g8z/busybox" id=56f9dc5c-2dab-42d0-8b1a-c7a9d3167a95 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.12386251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.129280857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.12990521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.146011626Z" level=info msg="Created container e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2: default/busybox-7b57f96db7-l5g8z/busybox" id=56f9dc5c-2dab-42d0-8b1a-c7a9d3167a95 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.147143527Z" level=info msg="Starting container: e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2" id=e10cf319-014f-4dc6-80be-da1936659c45 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:53:20 ha-791576 crio[653]: time="2025-12-02T19:53:20.151353561Z" level=info msg="Started container" PID=1519 containerID=e51c0c263b11dc743039e0fbacea43bb1879ee07b10e07761f8aeb5fa6995ae2 description=default/busybox-7b57f96db7-l5g8z/busybox id=e10cf319-014f-4dc6-80be-da1936659c45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e289b1b32fb874bafd23ca7b4d5935990b8e77c0c48373b157f79bf6077758c4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	e51c0c263b11d       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   About a minute ago   Running             busybox                   0                   e289b1b32fb87       busybox-7b57f96db7-l5g8z            default
	c74c4f823da84       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      7 minutes ago        Running             storage-provisioner       3                   611ff54ac571a       storage-provisioner                 kube-system
	0ca58a409109c       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      7 minutes ago        Running             kube-controller-manager   2                   065d40fa0cc23       kube-controller-manager-ha-791576   kube-system
	3335ad39bba28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      7 minutes ago        Running             coredns                   1                   785cb0dfb8b28       coredns-66bc5c9577-w2245            kube-system
	406623e1d0127       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      7 minutes ago        Running             coredns                   1                   fda4cb2ab460e       coredns-66bc5c9577-hw99j            kube-system
	e3e00e2da8bd7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      7 minutes ago        Running             kindnet-cni               1                   b3c174d7d003c       kindnet-m2l5j                       kube-system
	1ab649bc08ab0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      7 minutes ago        Exited              storage-provisioner       2                   611ff54ac571a       storage-provisioner                 kube-system
	4f18d2c8cbb18       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      7 minutes ago        Running             kube-proxy                1                   5e76fe966d8bb       kube-proxy-q5vfv                    kube-system
	71e9ce78d6466       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70                                      8 minutes ago        Running             kube-vip                  0                   75d9a258d0378       kube-vip-ha-791576                  kube-system
	a18297fd12571       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      8 minutes ago        Running             kube-apiserver            1                   d2e111aee1d35       kube-apiserver-ha-791576            kube-system
	0e19b5bb45d9e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      8 minutes ago        Exited              kube-controller-manager   1                   065d40fa0cc23       kube-controller-manager-ha-791576   kube-system
	392beb226748f       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      8 minutes ago        Running             etcd                      1                   d6f57a5f40b96       etcd-ha-791576                      kube-system
	a038e721d900d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      8 minutes ago        Running             kube-scheduler            1                   6a36f33b4c7e9       kube-scheduler-ha-791576            kube-system
	
	
	==> coredns [3335ad39bba28fdd293923b313dec13f1a33d55117eaf80083a781dff0d8bdea] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42808 - 26955 "HINFO IN 630864626443792637.4045400913318639804. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02501392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [406623e1d012777bc4fd0347ac8b3f005c55afa441ea4b81863c6c008ee30979] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47637 - 44081 "HINFO IN 8875301780668194042.4808208815551959978. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019656625s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
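
Both coredns replicas resolve their own HINFO probe but time out listing Services, Namespaces, and EndpointSlices from 10.96.0.1:443. That address is the in-cluster kubernetes Service VIP: by default it is the first usable address of the ServiceCIDR (10.96.0.0/12 in the cluster config earlier in the log), and it is only reachable while kube-proxy's rules on the node are in place. A small sketch deriving it from the CIDR (illustrative only):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // ServiceCIDR from the cluster config earlier in the log.
        svcCIDR := netip.MustParsePrefix("10.96.0.0/12")
        // The "kubernetes" Service VIP is the first usable address in the range,
        // which is exactly the 10.96.0.1:443 that coredns fails to dial above.
        fmt.Println(svcCIDR.Addr().Next()) // prints 10.96.0.1
    }
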
	
	
	==> describe nodes <==
	Name:               ha-791576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_41_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:41:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:55:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:54:59 +0000   Tue, 02 Dec 2025 19:47:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-791576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                2cbc5f56-f69a-4743-bfe0-c26cb688e6dd
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l5g8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 coredns-66bc5c9577-hw99j             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-w2245             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-791576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-m2l5j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-791576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-791576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-q5vfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-791576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-791576                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m56s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-791576 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           8m55s                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   Starting                 8m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m20s (x8 over 8m20s)  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m20s (x8 over 8m20s)  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m20s (x8 over 8m20s)  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m52s                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	
	
	Name:               ha-791576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:42:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:55:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:55:11 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:55:11 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:55:11 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:55:11 +0000   Tue, 02 Dec 2025 19:42:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-791576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dee40d7f-dceb-491c-be1b-bbfe6e5bbf5d
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-npkff                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-791576-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-ksng5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-791576-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-791576-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pjkt7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-791576-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-791576-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m37s                  kube-proxy       
	  Normal   Starting                 8m47s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Warning  CgroupV1                 9m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m30s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m29s (x8 over 9m30s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m29s (x8 over 9m30s)  kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m29s (x8 over 9m30s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m55s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   Starting                 8m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m16s (x8 over 8m16s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m16s (x8 over 8m16s)  kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m16s (x8 over 8m16s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m52s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	
	
	Name:               ha-791576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_44_30_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:44:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:46:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 02 Dec 2025 19:46:01 +0000   Tue, 02 Dec 2025 19:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-791576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                368f8765-e8de-4d0d-9ce4-3a1b12660712
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8zbzj       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-4tffm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-791576-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m55s              node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           7m52s              node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           7m21s              node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeNotReady             7m2s               node-controller  Node ha-791576-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:41] overlayfs: idmapped layers are currently not supported
	[ +32.622792] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:44] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:45] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:46] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:47] overlayfs: idmapped layers are currently not supported
	[ +24.868591] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [392beb226748f9eb08b097b707e9c3fae2ea843b47c447e75c2c16d866e678de] <==
	{"level":"info","ts":"2025-12-02T19:49:06.780208Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.809456Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"44eee1400a9a95d4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-02T19:49:06.809499Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.877742Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:49:06.877902Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.314765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:35732","server-name":"","error":"read tcp 192.168.49.2:2379->192.168.49.4:35732: read: connection reset by peer"}
	{"level":"warn","ts":"2025-12-02T19:55:09.320863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:35734","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T19:55:09.405698Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4579929246608719274 12593026477526642892)"}
	{"level":"info","ts":"2025-12-02T19:55:09.407853Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"44eee1400a9a95d4","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-12-02T19:55:09.407978Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.408260Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.408319Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.408590Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.408680Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.408873Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.409036Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4","error":"context canceled"}
	{"level":"warn","ts":"2025-12-02T19:55:09.409096Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"44eee1400a9a95d4","error":"failed to read 44eee1400a9a95d4 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-12-02T19:55:09.409137Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.409283Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4","error":"http: read on closed response body"}
	{"level":"info","ts":"2025-12-02T19:55:09.409336Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.409372Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.409415Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"44eee1400a9a95d4"}
	{"level":"info","ts":"2025-12-02T19:55:09.409476Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.436994Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"44eee1400a9a95d4"}
	{"level":"warn","ts":"2025-12-02T19:55:09.444288Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"44eee1400a9a95d4"}
	
	
	==> kernel <==
	 19:55:18 up  1:37,  0 user,  load average: 0.97, 1.27, 1.23
	Linux ha-791576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3e00e2da8bd7227823a5aa7d6e5e4ac4d0b3b6254164b8f98c55f9fe1e0a41f] <==
	I1202 19:54:42.312918       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:54:42.313067       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:54:42.313115       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:54:52.295209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:54:52.295374       1 main.go:301] handling current node
	I1202 19:54:52.295398       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:54:52.295406       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:54:52.295573       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1202 19:54:52.295595       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:54:52.295666       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:54:52.295678       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:55:02.294739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:55:02.294851       1 main.go:301] handling current node
	I1202 19:55:02.294904       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:55:02.294921       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:55:02.295068       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1202 19:55:02.295112       1 main.go:324] Node ha-791576-m03 has CIDR [10.244.2.0/24] 
	I1202 19:55:02.295205       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:55:02.295219       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:55:12.294908       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 19:55:12.294960       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 19:55:12.295111       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 19:55:12.295123       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 19:55:12.295184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 19:55:12.295243       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a18297fd12571f4a199fa63e12cc54b364415200b0305e4f6031acc05cb7bde9] <==
	I1202 19:47:20.864636       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 19:47:20.866204       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 19:47:20.866305       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 19:47:20.885386       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 19:47:20.888287       1 aggregator.go:171] initial CRD sync complete...
	I1202 19:47:20.888313       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 19:47:20.888321       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 19:47:20.888328       1 cache.go:39] Caches are synced for autoregister controller
	I1202 19:47:20.889012       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 19:47:20.889287       1 cache.go:39] Caches are synced for LocalAvailability controller
	W1202 19:47:20.904276       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1202 19:47:20.906209       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 19:47:20.916565       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1202 19:47:20.920922       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1202 19:47:20.934648       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 19:47:20.959819       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 19:47:20.961820       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 19:47:20.967200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 19:47:20.968399       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 19:47:21.496374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 19:47:21.505479       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1202 19:47:22.134237       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1202 19:47:27.016568       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 19:47:27.032930       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 19:47:27.190342       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0ca58a409109c6cf93ecd9eb064e7f3091b3dd592f95be9877036c0d2bbfeb8d] <==
	I1202 19:47:26.656190       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576"
	I1202 19:47:26.656300       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576-m02"
	I1202 19:47:26.656402       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 19:47:26.660500       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 19:47:26.660768       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 19:47:26.663628       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 19:47:26.674684       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1202 19:47:26.651025       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 19:47:26.678311       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 19:47:26.697935       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 19:47:26.687406       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 19:47:26.700695       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:47:26.687552       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 19:47:26.687521       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 19:47:26.687960       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 19:47:26.687543       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 19:47:26.816935       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:47:26.835171       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 19:47:26.835200       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 19:47:26.835207       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 19:47:31.218533       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-791576-m04"
	I1202 19:53:17.454482       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:53:17.454338       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-xjn7v"
	E1202 19:55:10.010887       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-791576-m03\", UID:\"8a5dba8e-9b76-4e87-9053-ac95beaf6643\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noC
opy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-791576-m03\", UID:\"530f9ded-0cfe-4563-953d-e3f475e6bf0e\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-791576-m03\" not found" logger="UnhandledError"
	E1202 19:55:10.032494       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-791576-m03\", UID:\"5c6202c1-f485-4e0e-8c3a-f878b287a56b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mut
ex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-791576-m03\", UID:\"530f9ded-0cfe-4563-953d-e3f475e6bf0e\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-791576-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44] <==
	I1202 19:47:00.439112       1 serving.go:386] Generated self-signed cert in-memory
	I1202 19:47:01.716830       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 19:47:01.716879       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:01.723350       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 19:47:01.723493       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 19:47:01.723581       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 19:47:01.723592       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 19:47:21.737115       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [4f18d2c8cbb18519eff20cb6efdd106364f8f81f655e7d0e55cb89f551d5ed2f] <==
	I1202 19:47:22.149595       1 server_linux.go:53] "Using iptables proxy"
	I1202 19:47:22.267082       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:47:22.368012       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:47:22.368108       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 19:47:22.368213       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:47:22.406247       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 19:47:22.406301       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:47:22.411880       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:47:22.412231       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:47:22.412424       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:22.415564       1 config.go:200] "Starting service config controller"
	I1202 19:47:22.415619       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:47:22.415683       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:47:22.415727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:47:22.415771       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:47:22.415809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:47:22.419448       1 config.go:309] "Starting node config controller"
	I1202 19:47:22.419524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:47:22.419556       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 19:47:22.515835       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 19:47:22.515918       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:47:22.515839       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a038e721d900d1d05f302d84321aed3efa00807fa84f377dff1bb59ed20d56ce] <==
	I1202 19:47:20.728119       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 19:47:20.728161       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:47:20.745935       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:47:20.746041       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:47:20.747549       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 19:47:20.749738       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 19:47:20.809434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 19:47:20.809434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 19:47:20.809565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 19:47:20.809638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 19:47:20.809646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 19:47:20.809729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 19:47:20.809794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 19:47:20.809851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 19:47:20.809908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 19:47:20.809962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 19:47:20.810015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 19:47:20.810111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 19:47:20.810263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 19:47:20.810386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 19:47:20.813845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 19:47:20.813926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 19:47:20.813972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 19:47:20.814068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1202 19:47:20.946679       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 19:47:21 ha-791576 kubelet[793]: E1202 19:47:21.014395     793 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-791576\" already exists" pod="kube-system/kube-controller-manager-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.422624     793 apiserver.go:52] "Watching apiserver"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.426135     793 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-791576" podUID="1848798a-e3e5-49f2-a138-7a169024e0bd"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.440465     793 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.449304     793 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.449333     793 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494517     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-xtables-lock\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494710     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7a2e34ca-2f88-457c-8898-9cfbab53ca55-tmp\") pod \"storage-provisioner\" (UID: \"7a2e34ca-2f88-457c-8898-9cfbab53ca55\") " pod="kube-system/storage-provisioner"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494792     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-lib-modules\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.494985     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/011527c2-0bbf-4dd9-a775-7bbd1a8647a4-xtables-lock\") pod \"kube-proxy-q5vfv\" (UID: \"011527c2-0bbf-4dd9-a775-7bbd1a8647a4\") " pod="kube-system/kube-proxy-q5vfv"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.495131     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/011527c2-0bbf-4dd9-a775-7bbd1a8647a4-lib-modules\") pod \"kube-proxy-q5vfv\" (UID: \"011527c2-0bbf-4dd9-a775-7bbd1a8647a4\") " pod="kube-system/kube-proxy-q5vfv"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.495164     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a984b329-2638-49d7-98e3-0c21cfed28c6-cni-cfg\") pod \"kindnet-m2l5j\" (UID: \"a984b329-2638-49d7-98e3-0c21cfed28c6\") " pod="kube-system/kindnet-m2l5j"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.517177     793 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 02 19:47:21 ha-791576 kubelet[793]: I1202 19:47:21.605158     793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-791576" podStartSLOduration=0.605139116 podStartE2EDuration="605.139116ms" podCreationTimestamp="2025-12-02 19:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 19:47:21.587118107 +0000 UTC m=+23.288113583" watchObservedRunningTime="2025-12-02 19:47:21.605139116 +0000 UTC m=+23.306134592"
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.780422     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681 WatchSource:0}: Error finding container 611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681: Status 404 returned error can't find the container with id 611ff54ac571a9220e0f6d89a1ababdc5f44ac44cba7cd642507da0974906681
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.810847     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d WatchSource:0}: Error finding container b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d: Status 404 returned error can't find the container with id b3c174d7d003c5fe90123aaea9b60bced8cb6e235487554632b1c2a0c821611d
	Dec 02 19:47:21 ha-791576 kubelet[793]: W1202 19:47:21.882017     793 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e WatchSource:0}: Error finding container fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e: Status 404 returned error can't find the container with id fda4cb2ab460e6c595eb6a6b3f800cb2f4544d625f88dd5fbce913345b9d293e
	Dec 02 19:47:22 ha-791576 kubelet[793]: I1202 19:47:22.507633     793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1a35affb644c5a6d9375f3959ef470" path="/var/lib/kubelet/pods/de1a35affb644c5a6d9375f3959ef470/volumes"
	Dec 02 19:47:22 ha-791576 kubelet[793]: I1202 19:47:22.594773     793 scope.go:117] "RemoveContainer" containerID="0e19b5bb45d9e08dcf831061abb4639c972f0af3d50530839ae99761c1950e44"
	Dec 02 19:47:52 ha-791576 kubelet[793]: I1202 19:47:52.698574     793 scope.go:117] "RemoveContainer" containerID="1ab649bc08ab060742673f50eeb7c2a57ee5a4578e1a59eddd554c3ad6d7404e"
	Dec 02 19:47:58 ha-791576 kubelet[793]: E1202 19:47:58.432933     793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237\": container with ID starting with 364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237 not found: ID does not exist" containerID="364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237"
	Dec 02 19:47:58 ha-791576 kubelet[793]: I1202 19:47:58.432988     793 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237" err="rpc error: code = NotFound desc = could not find container \"364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237\": container with ID starting with 364860b0d0047a62de399be8612f8fc78348512f85c1dcfe0ccf50ea5bce3237 not found: ID does not exist"
	Dec 02 19:47:58 ha-791576 kubelet[793]: E1202 19:47:58.433641     793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a\": container with ID starting with f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a not found: ID does not exist" containerID="f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a"
	Dec 02 19:47:58 ha-791576 kubelet[793]: I1202 19:47:58.433699     793 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a" err="rpc error: code = NotFound desc = could not find container \"f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a\": container with ID starting with f693d603f15b47a9d6a41c43b6e8b5062d9040799068d9b1bc946e85035b8f9a not found: ID does not exist"
	Dec 02 19:53:17 ha-791576 kubelet[793]: I1202 19:53:17.758545     793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhfw\" (UniqueName: \"kubernetes.io/projected/9231dae8-fa3f-4719-aa0b-e2893cf7afe6-kube-api-access-gdhfw\") pod \"busybox-7b57f96db7-l5g8z\" (UID: \"9231dae8-fa3f-4719-aa0b-e2893cf7afe6\") " pod="default/busybox-7b57f96db7-l5g8z"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-791576 -n ha-791576
helpers_test.go:269: (dbg) Run:  kubectl --context ha-791576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-k9bh8
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-791576 describe pod busybox-7b57f96db7-k9bh8
helpers_test.go:290: (dbg) kubectl --context ha-791576 describe pod busybox-7b57f96db7-k9bh8:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-k9bh8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fp5lt (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fp5lt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  2m2s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint(s), 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m3s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint(s), 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.06s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (374.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1202 19:56:45.851561    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:56:57.357633    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:58:08.922567    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:58:46.175688    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:45.851782    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-791576 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m11.793688828s)

                                                
                                                
-- stdout --
	* [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	* Enabled addons: 
	
	* Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-791576-m04" worker node in "ha-791576" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:55:44.177967   93254 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:55:44.178109   93254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.178122   93254 out.go:374] Setting ErrFile to fd 2...
	I1202 19:55:44.178128   93254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.178419   93254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:55:44.178766   93254 out.go:368] Setting JSON to false
	I1202 19:55:44.179556   93254 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5883,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:55:44.179622   93254 start.go:143] virtualization:  
	I1202 19:55:44.182617   93254 out.go:179] * [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:55:44.186436   93254 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:55:44.186585   93254 notify.go:221] Checking for updates...
	I1202 19:55:44.192062   93254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:55:44.194974   93254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:44.197803   93254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:55:44.200682   93254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:55:44.203721   93254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:55:44.206951   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:44.207525   93254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:55:44.231700   93254 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:55:44.231811   93254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:44.301596   93254 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:55:44.287047316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:44.301733   93254 docker.go:319] overlay module found
	I1202 19:55:44.304924   93254 out.go:179] * Using the docker driver based on existing profile
	I1202 19:55:44.307862   93254 start.go:309] selected driver: docker
	I1202 19:55:44.307884   93254 start.go:927] validating driver "docker" against &{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kube
flow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:44.308026   93254 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:55:44.308131   93254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:44.371573   93254 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:55:44.362799023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:44.372011   93254 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:55:44.372042   93254 cni.go:84] Creating CNI manager for ""
	I1202 19:55:44.372097   93254 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 19:55:44.372154   93254 start.go:353] cluster config:
	{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:44.377185   93254 out.go:179] * Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	I1202 19:55:44.379977   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:55:44.382846   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:55:44.385821   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:44.385879   93254 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 19:55:44.385893   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:55:44.385993   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:55:44.386008   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:55:44.386151   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:44.386369   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:55:44.405321   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:55:44.405352   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:55:44.405373   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:55:44.405404   93254 start.go:360] acquireMachinesLock for ha-791576: {Name:mkf0dd990410be65dae2a66c72db1ebd52e2b0ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:55:44.405469   93254 start.go:364] duration metric: took 41.304µs to acquireMachinesLock for "ha-791576"
	I1202 19:55:44.405492   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:55:44.405502   93254 fix.go:54] fixHost starting: 
	I1202 19:55:44.405802   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.422067   93254 fix.go:112] recreateIfNeeded on ha-791576: state=Stopped err=<nil>
	W1202 19:55:44.422096   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:55:44.425385   93254 out.go:252] * Restarting existing docker container for "ha-791576" ...
	I1202 19:55:44.425482   93254 cli_runner.go:164] Run: docker start ha-791576
	I1202 19:55:44.656773   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.678497   93254 kic.go:430] container "ha-791576" state is running.
	I1202 19:55:44.678860   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:44.708256   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:44.708493   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:44.708552   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:44.731511   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:44.731837   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:44.731849   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:44.733165   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:55:47.885197   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:55:47.885250   93254 ubuntu.go:182] provisioning hostname "ha-791576"
	I1202 19:55:47.885314   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:47.903491   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:47.903813   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:47.903827   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576 && echo "ha-791576" | sudo tee /etc/hostname
	I1202 19:55:48.069176   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:55:48.069254   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.089514   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:48.089877   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:48.089901   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:48.242008   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:48.242032   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:55:48.242057   93254 ubuntu.go:190] setting up certificates
	I1202 19:55:48.242069   93254 provision.go:84] configureAuth start
	I1202 19:55:48.242132   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:48.261821   93254 provision.go:143] copyHostCerts
	I1202 19:55:48.261871   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:48.261931   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:55:48.261951   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:48.262038   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:55:48.262141   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:48.262166   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:55:48.262174   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:48.262211   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:55:48.262289   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:48.262314   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:55:48.262323   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:48.262355   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:55:48.262435   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576 san=[127.0.0.1 192.168.49.2 ha-791576 localhost minikube]
	I1202 19:55:48.452060   93254 provision.go:177] copyRemoteCerts
	I1202 19:55:48.452139   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:48.452177   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.470613   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:48.573192   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:55:48.573250   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:48.589521   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:55:48.589763   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 19:55:48.606218   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:55:48.606297   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 19:55:48.623387   93254 provision.go:87] duration metric: took 381.29482ms to configureAuth
	I1202 19:55:48.623419   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:48.623653   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:48.623765   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.640254   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:48.640566   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:48.640586   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:49.030725   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:49.030745   93254 machine.go:97] duration metric: took 4.32224289s to provisionDockerMachine
	I1202 19:55:49.030757   93254 start.go:293] postStartSetup for "ha-791576" (driver="docker")
	I1202 19:55:49.030768   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:49.030827   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:49.030865   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.051519   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.153353   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:49.156583   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:49.156607   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:49.156618   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:55:49.156674   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:55:49.156758   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:55:49.156764   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:55:49.156861   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:55:49.164042   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:49.180380   93254 start.go:296] duration metric: took 149.593959ms for postStartSetup
	I1202 19:55:49.180465   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:49.180519   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.197329   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.298832   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:49.303554   93254 fix.go:56] duration metric: took 4.898044691s for fixHost
	I1202 19:55:49.303578   93254 start.go:83] releasing machines lock for "ha-791576", held for 4.898097178s
	I1202 19:55:49.303651   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:49.320407   93254 ssh_runner.go:195] Run: cat /version.json
	I1202 19:55:49.320456   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.320470   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:49.320533   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.338342   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.345505   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.524252   93254 ssh_runner.go:195] Run: systemctl --version
	I1202 19:55:49.530647   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:49.565296   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:49.569498   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:49.569577   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:49.577094   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:55:49.577167   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:55:49.577205   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:55:49.577256   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:49.592079   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:49.605549   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:49.605621   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:49.621023   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:49.635753   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:49.750982   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:55:49.859462   93254 docker.go:234] disabling docker service ...
	I1202 19:55:49.859565   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:55:49.874667   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:55:49.887012   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:55:50.007847   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:55:50.134338   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:55:50.146986   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:55:50.161229   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:55:50.161317   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.170383   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:55:50.170453   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.179542   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.188652   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.197399   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:55:50.205856   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.214897   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.223103   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.231783   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:55:50.238878   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:55:50.245749   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:50.382453   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:55:50.564448   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:55:50.564526   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:55:50.568176   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:55:50.568235   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:55:50.571563   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:55:50.595656   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:55:50.595739   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:55:50.625390   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:55:50.655103   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:55:50.658061   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:55:50.674479   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:55:50.678575   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:50.688260   93254 kubeadm.go:884] updating cluster {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:55:50.688998   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:50.689083   93254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:50.726565   93254 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:50.726626   93254 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:55:50.726708   93254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:50.756058   93254 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:50.756081   93254 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:55:50.756091   93254 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:55:50.756189   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:55:50.756269   93254 ssh_runner.go:195] Run: crio config
	I1202 19:55:50.831624   93254 cni.go:84] Creating CNI manager for ""
	I1202 19:55:50.831657   93254 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 19:55:50.831710   93254 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:55:50.831742   93254 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-791576 NodeName:ha-791576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:55:50.831887   93254 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-791576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:55:50.831904   93254 kube-vip.go:115] generating kube-vip config ...
	I1202 19:55:50.831959   93254 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:55:50.843196   93254 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:55:50.843290   93254 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:55:50.843354   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:55:50.850587   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:55:50.850656   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 19:55:50.857765   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 19:55:50.869276   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:55:50.881241   93254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1202 19:55:50.893240   93254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:55:50.905823   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:55:50.909303   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:50.918750   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:51.026144   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:55:51.042322   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.2
	I1202 19:55:51.042383   93254 certs.go:195] generating shared ca certs ...
	I1202 19:55:51.042413   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.042572   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:55:51.042673   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:55:51.042696   93254 certs.go:257] generating profile certs ...
	I1202 19:55:51.042790   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:55:51.042844   93254 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f
	I1202 19:55:51.042883   93254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1202 19:55:51.207706   93254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f ...
	I1202 19:55:51.207774   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f: {Name:mk0befc0b318cce17722eedc60197d074ef72403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.208003   93254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f ...
	I1202 19:55:51.208041   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f: {Name:mk6747dc6a0e6b21e4d9bc0a0b21cc4e1f72108f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.208176   93254 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt
	I1202 19:55:51.208351   93254 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key
	I1202 19:55:51.208521   93254 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:55:51.208562   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:55:51.208598   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:55:51.208631   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:55:51.208669   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:55:51.208699   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:55:51.208731   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:55:51.208772   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:55:51.208803   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:55:51.208876   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:55:51.208937   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:55:51.208962   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:55:51.209012   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:55:51.209063   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:55:51.209110   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:55:51.209189   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:51.209271   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.209343   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.209384   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.210038   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:55:51.231782   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:55:51.250385   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:55:51.267781   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:55:51.286345   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 19:55:51.304523   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:55:51.322173   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:55:51.340727   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:55:51.358222   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:55:51.376555   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:55:51.392531   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:55:51.409238   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:55:51.421079   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:55:51.427316   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:55:51.435537   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.438995   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.439062   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.479993   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:55:51.487626   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:55:51.495524   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.499309   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.499393   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.539899   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:55:51.548401   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:55:51.556378   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.559859   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.559918   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.600611   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
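	[editor note] The three symlink steps above follow the OpenSSL c_rehash convention: the link name under /etc/ssl/certs is the certificate's subject hash plus a ".0" suffix, which is exactly what `openssl x509 -hash -noout` prints. A minimal sketch of the same wiring for one certificate (paths taken from the log above; the hash value depends on the certificate's subject, e.g. b5213941 for minikubeCA.pem in this run):

		# compute the subject hash and create the hash-named trust symlink
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"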
	I1202 19:55:51.608321   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:55:51.611874   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:55:51.656450   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:55:51.699650   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:55:51.748675   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:55:51.798307   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:55:51.891003   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
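	[editor note] The `-checkend 86400` runs above ask OpenSSL whether each certificate will still be valid in 86400 seconds (24 hours): the command exits 0 if the certificate does not expire within that window and non-zero otherwise, which is how minikube decides whether the existing certs can be reused. A minimal sketch of the same check for a single certificate (path taken from the log):

		# exit status tells us whether the cert survives the next 24h
		if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
			echo "cert valid for at least another 24h"
		else
			echo "cert expires within 24h (or is unreadable); would need regeneration"
		fi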
	I1202 19:55:51.960070   93254 kubeadm.go:401] StartCluster: {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:51.960253   93254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:55:51.960360   93254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:55:52.020134   93254 cri.go:89] found id: "7193dbe9e138217968055549ef0c321456d1ba0d688ed39c88faecd90d288068"
	I1202 19:55:52.020208   93254 cri.go:89] found id: "53ec2f9388ecacb74421a2e8c3b5d943afd06e705e756948fa12bc41dd8a37f9"
	I1202 19:55:52.020237   93254 cri.go:89] found id: "9e7e710fc30aaba995500f37ffa3972d03427ad4b5096ea5e3f635761be6fe1e"
	I1202 19:55:52.020256   93254 cri.go:89] found id: "b0964e2af680e31e59bc41f16955d47d76026029392b1597b247a7226618e258"
	I1202 19:55:52.020292   93254 cri.go:89] found id: "935b971802eea43815b6a2ba78749d6f6a65dfeb75a70453def4a7ff8c6e8f29"
	I1202 19:55:52.020316   93254 cri.go:89] found id: ""
	I1202 19:55:52.020420   93254 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 19:55:52.039471   93254 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 19:55:52.039648   93254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:55:52.052041   93254 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:55:52.052113   93254 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:55:52.052202   93254 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:55:52.067291   93254 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:55:52.067793   93254 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-791576" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:52.067946   93254 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "ha-791576" cluster setting kubeconfig missing "ha-791576" context setting]
	I1202 19:55:52.068355   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.069044   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:55:52.069935   93254 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:55:52.070037   93254 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:55:52.070083   93254 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:55:52.070105   93254 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:55:52.070125   93254 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:55:52.070010   93254 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:55:52.070578   93254 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:55:52.089251   93254 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:55:52.089343   93254 kubeadm.go:602] duration metric: took 37.210796ms to restartPrimaryControlPlane
	I1202 19:55:52.089369   93254 kubeadm.go:403] duration metric: took 129.308895ms to StartCluster
	I1202 19:55:52.089422   93254 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.089527   93254 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:52.090263   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.090544   93254 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:55:52.090598   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:55:52.090630   93254 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:55:52.091558   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:52.096453   93254 out.go:179] * Enabled addons: 
	I1202 19:55:52.099512   93254 addons.go:530] duration metric: took 8.877075ms for enable addons: enabled=[]
	I1202 19:55:52.099607   93254 start.go:247] waiting for cluster config update ...
	I1202 19:55:52.099630   93254 start.go:256] writing updated cluster config ...
	I1202 19:55:52.102945   93254 out.go:203] 
	I1202 19:55:52.106144   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:52.106258   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.109518   93254 out.go:179] * Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	I1202 19:55:52.112289   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:55:52.115487   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:55:52.118244   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:52.118264   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:55:52.118378   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:55:52.118387   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:55:52.118504   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.118707   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:55:52.150292   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:55:52.150314   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:55:52.150328   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:55:52.150350   93254 start.go:360] acquireMachinesLock for ha-791576-m02: {Name:mka8905b718126ef1af03281337b5ea61f190248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:55:52.150401   93254 start.go:364] duration metric: took 35.93µs to acquireMachinesLock for "ha-791576-m02"
	I1202 19:55:52.150419   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:55:52.150424   93254 fix.go:54] fixHost starting: m02
	I1202 19:55:52.150685   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:52.190695   93254 fix.go:112] recreateIfNeeded on ha-791576-m02: state=Stopped err=<nil>
	W1202 19:55:52.190719   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:55:52.194176   93254 out.go:252] * Restarting existing docker container for "ha-791576-m02" ...
	I1202 19:55:52.194252   93254 cli_runner.go:164] Run: docker start ha-791576-m02
	I1202 19:55:52.599976   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:52.629412   93254 kic.go:430] container "ha-791576-m02" state is running.
	I1202 19:55:52.629885   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:52.664048   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.664285   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:52.664350   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:52.688321   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:52.688636   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:52.688648   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:52.689286   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:55:55.971095   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:55:55.971155   93254 ubuntu.go:182] provisioning hostname "ha-791576-m02"
	I1202 19:55:55.971238   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:55.998825   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:55.999132   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:55.999149   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m02 && echo "ha-791576-m02" | sudo tee /etc/hostname
	I1202 19:55:56.285260   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:55:56.285380   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:56.324784   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:56.325097   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:56.325112   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:56.574478   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:56.574546   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:55:56.574578   93254 ubuntu.go:190] setting up certificates
	I1202 19:55:56.574609   93254 provision.go:84] configureAuth start
	I1202 19:55:56.574702   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:56.605527   93254 provision.go:143] copyHostCerts
	I1202 19:55:56.605564   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:56.605607   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:55:56.605617   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:56.605764   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:55:56.605858   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:56.605875   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:55:56.605880   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:56.605907   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:55:56.605945   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:56.605961   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:55:56.605965   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:56.605988   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:55:56.606032   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m02 san=[127.0.0.1 192.168.49.3 ha-791576-m02 localhost minikube]
	I1202 19:55:57.020409   93254 provision.go:177] copyRemoteCerts
	I1202 19:55:57.020550   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:57.020628   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:57.038510   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:57.153644   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:55:57.153716   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:55:57.184300   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:55:57.184359   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:55:57.266970   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:55:57.267064   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:57.331675   93254 provision.go:87] duration metric: took 757.029391ms to configureAuth
	I1202 19:55:57.331740   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:57.331983   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:57.332101   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:57.363340   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:57.363649   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:57.363662   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:58.504594   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:58.504673   93254 machine.go:97] duration metric: took 5.840377716s to provisionDockerMachine
	I1202 19:55:58.504698   93254 start.go:293] postStartSetup for "ha-791576-m02" (driver="docker")
	I1202 19:55:58.504722   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:58.504818   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:58.504881   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.552759   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.683948   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:58.687504   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:58.687528   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:58.687538   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:55:58.687590   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:55:58.687661   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:55:58.687667   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:55:58.687766   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:55:58.696105   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:58.729078   93254 start.go:296] duration metric: took 224.353376ms for postStartSetup
	I1202 19:55:58.729200   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:58.729258   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.748281   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.865403   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:58.871596   93254 fix.go:56] duration metric: took 6.721165168s for fixHost
	I1202 19:55:58.871617   93254 start.go:83] releasing machines lock for "ha-791576-m02", held for 6.7212084s
	I1202 19:55:58.871682   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:58.902526   93254 out.go:179] * Found network options:
	I1202 19:55:58.905433   93254 out.go:179]   - NO_PROXY=192.168.49.2
	W1202 19:55:58.908359   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:55:58.908394   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:55:58.908458   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:58.908500   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.908758   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:58.908808   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.941876   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.957861   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:59.379469   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:59.393428   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:59.393549   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:59.436981   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:55:59.437054   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:55:59.437109   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:55:59.437185   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:59.476789   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:59.492965   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:59.493030   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:59.510203   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:59.535902   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:59.890794   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:56:00.391688   93254 docker.go:234] disabling docker service ...
	I1202 19:56:00.391868   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:56:00.454884   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:56:00.506073   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:56:00.797340   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:56:01.166082   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:56:01.219009   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:56:01.256352   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:56:01.256455   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.307607   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:56:01.307708   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.346124   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.369272   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.393260   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:56:01.408865   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.438945   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.451063   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
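	[editor note] Assuming the sed edits above all apply cleanly, the CRI-O drop-in they target would end up containing roughly the following fragment (values taken verbatim from the commands in the log; key placement within /etc/crio/crio.conf.d/02-crio.conf may differ):

		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]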
	I1202 19:56:01.488074   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:56:01.499136   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:56:01.507846   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:56:01.747608   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:57:32.000346   93254 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.252704452s)
	I1202 19:57:32.000372   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:57:32.000423   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:57:32.004239   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:57:32.004296   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:57:32.007869   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:57:32.036443   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:57:32.036523   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:32.065233   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:32.100050   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:57:32.103063   93254 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:57:32.106043   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:57:32.121822   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:57:32.126366   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:32.138121   93254 mustload.go:66] Loading cluster: ha-791576
	I1202 19:57:32.138366   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:32.138687   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:57:32.155548   93254 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:57:32.155827   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.3
	I1202 19:57:32.155834   93254 certs.go:195] generating shared ca certs ...
	I1202 19:57:32.155849   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:57:32.155961   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:57:32.156000   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:57:32.156007   93254 certs.go:257] generating profile certs ...
	I1202 19:57:32.156076   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:57:32.156141   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.8b416d14
	I1202 19:57:32.156181   93254 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:57:32.156189   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:57:32.156201   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:57:32.156212   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:57:32.156222   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:57:32.156232   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:57:32.156243   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:57:32.156253   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:57:32.156264   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:57:32.156310   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:57:32.156339   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:57:32.156347   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:57:32.156372   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:57:32.156396   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:57:32.156422   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:57:32.156466   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:32.156496   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.156509   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.156520   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.156574   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:57:32.173330   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:57:32.269964   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:57:32.273629   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:57:32.281594   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:57:32.284955   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:57:32.292668   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:57:32.296257   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:57:32.304405   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:57:32.307845   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:57:32.316416   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:57:32.319715   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:57:32.331425   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:57:32.335418   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:57:32.345158   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:57:32.362660   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:57:32.381060   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:57:32.399011   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:57:32.417547   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 19:57:32.436697   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:57:32.454716   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:57:32.472049   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:57:32.488952   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:57:32.507493   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:57:32.525119   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:57:32.543594   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:57:32.556208   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:57:32.568883   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:57:32.582212   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:57:32.594098   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:57:32.606261   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:57:32.618196   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:57:32.631378   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:57:32.637197   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:57:32.645952   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.649933   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.650038   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.692551   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:57:32.700398   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:57:32.708435   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.711984   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.712047   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.752921   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:57:32.760626   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:57:32.768641   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.772345   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.772443   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.817730   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:57:32.825349   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:57:32.829063   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:57:32.869702   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:57:32.910289   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:57:32.951408   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:57:32.991818   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:57:33.032586   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:57:33.073299   93254 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1202 19:57:33.073392   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:57:33.073421   93254 kube-vip.go:115] generating kube-vip config ...
	I1202 19:57:33.073489   93254 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:57:33.084964   93254 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:57:33.085019   93254 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:57:33.085079   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:57:33.092389   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:57:33.092504   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:57:33.099839   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:57:33.111954   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:57:33.124537   93254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:57:33.139421   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:57:33.144249   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:33.154311   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:33.286984   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:33.300875   93254 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:57:33.301346   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:33.304919   93254 out.go:179] * Verifying Kubernetes components...
	I1202 19:57:33.307970   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:33.441136   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:33.455239   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:57:33.455306   93254 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:57:33.455557   93254 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:57:37.330869   93254 node_ready.go:49] node "ha-791576-m02" is "Ready"
	I1202 19:57:37.330905   93254 node_ready.go:38] duration metric: took 3.875318836s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:57:37.330920   93254 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:57:37.330980   93254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:57:37.350335   93254 api_server.go:72] duration metric: took 4.049370544s to wait for apiserver process to appear ...
	I1202 19:57:37.350361   93254 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:57:37.350381   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:37.437921   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:37.437997   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:37.850509   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:37.877801   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:37.877836   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:38.351486   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:38.375050   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:38.375085   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:38.850665   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:38.878543   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:38.878572   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:39.351038   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:39.378413   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:39.378441   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:39.850846   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:39.864441   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:39.864468   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:40.350812   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:40.361521   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:40.361559   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:40.850824   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:40.864753   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:57:40.866306   93254 api_server.go:141] control plane version: v1.34.2
	I1202 19:57:40.866336   93254 api_server.go:131] duration metric: took 3.51596701s to wait for apiserver health ...
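	(Editorial note, not part of the log: the lines above show the apiserver /healthz endpoint being polled roughly every 500ms, returning 500 while the rbac/bootstrap-roles post-start hook is still pending and 200 once bootstrapping finishes. A minimal stand-alone sketch of that polling pattern is below; it is not minikube's implementation, and the endpoint, interval, and timeout are simply taken from the log. minikube authenticates with the cluster CA rather than skipping TLS verification.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Illustration only: the real code trusts the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// A 500 listing "[-]poststarthook/rbac/bootstrap-roles failed" is
				// expected while the control plane is still bootstrapping.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}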
	I1202 19:57:40.866371   93254 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:57:40.881984   93254 system_pods.go:59] 26 kube-system pods found
	I1202 19:57:40.882074   93254 system_pods.go:61] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.882090   93254 system_pods.go:61] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.882098   93254 system_pods.go:61] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:57:40.882107   93254 system_pods.go:61] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:57:40.882112   93254 system_pods.go:61] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:57:40.882116   93254 system_pods.go:61] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:57:40.882146   93254 system_pods.go:61] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:57:40.882164   93254 system_pods.go:61] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:57:40.882169   93254 system_pods.go:61] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:57:40.882175   93254 system_pods.go:61] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:57:40.882183   93254 system_pods.go:61] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running
	I1202 19:57:40.882192   93254 system_pods.go:61] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:57:40.882207   93254 system_pods.go:61] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:57:40.882228   93254 system_pods.go:61] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running
	I1202 19:57:40.882258   93254 system_pods.go:61] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:57:40.882267   93254 system_pods.go:61] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:57:40.882271   93254 system_pods.go:61] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:57:40.882280   93254 system_pods.go:61] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:57:40.882288   93254 system_pods.go:61] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:57:40.882291   93254 system_pods.go:61] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:57:40.882295   93254 system_pods.go:61] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:57:40.882298   93254 system_pods.go:61] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:57:40.882302   93254 system_pods.go:61] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:57:40.882306   93254 system_pods.go:61] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:57:40.882325   93254 system_pods.go:61] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:57:40.882337   93254 system_pods.go:61] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:57:40.882356   93254 system_pods.go:74] duration metric: took 15.961542ms to wait for pod list to return data ...
	I1202 19:57:40.882368   93254 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:57:40.886711   93254 default_sa.go:45] found service account: "default"
	I1202 19:57:40.886765   93254 default_sa.go:55] duration metric: took 4.377498ms for default service account to be created ...
	I1202 19:57:40.886816   93254 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:57:40.896351   93254 system_pods.go:86] 26 kube-system pods found
	I1202 19:57:40.896402   93254 system_pods.go:89] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.896455   93254 system_pods.go:89] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.896471   93254 system_pods.go:89] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:57:40.896477   93254 system_pods.go:89] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:57:40.896488   93254 system_pods.go:89] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:57:40.896493   93254 system_pods.go:89] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:57:40.896517   93254 system_pods.go:89] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:57:40.896529   93254 system_pods.go:89] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:57:40.896547   93254 system_pods.go:89] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:57:40.896561   93254 system_pods.go:89] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:57:40.896567   93254 system_pods.go:89] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running
	I1202 19:57:40.896577   93254 system_pods.go:89] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:57:40.896584   93254 system_pods.go:89] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:57:40.896589   93254 system_pods.go:89] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running
	I1202 19:57:40.896594   93254 system_pods.go:89] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:57:40.896605   93254 system_pods.go:89] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:57:40.896635   93254 system_pods.go:89] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:57:40.896647   93254 system_pods.go:89] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:57:40.896651   93254 system_pods.go:89] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:57:40.896655   93254 system_pods.go:89] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:57:40.896660   93254 system_pods.go:89] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:57:40.896669   93254 system_pods.go:89] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:57:40.896714   93254 system_pods.go:89] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:57:40.896731   93254 system_pods.go:89] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:57:40.896736   93254 system_pods.go:89] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:57:40.896740   93254 system_pods.go:89] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:57:40.896767   93254 system_pods.go:126] duration metric: took 9.944455ms to wait for k8s-apps to be running ...
	I1202 19:57:40.896779   93254 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:57:40.896851   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:57:40.912940   93254 system_svc.go:56] duration metric: took 16.146284ms WaitForService to wait for kubelet
	I1202 19:57:40.912971   93254 kubeadm.go:587] duration metric: took 7.612010896s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:57:40.913011   93254 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:57:40.922663   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922709   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922747   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922761   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922765   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922770   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922782   93254 node_conditions.go:105] duration metric: took 9.75895ms to run NodePressure ...
	I1202 19:57:40.922797   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:57:40.922840   93254 start.go:256] writing updated cluster config ...
	I1202 19:57:40.926963   93254 out.go:203] 
	I1202 19:57:40.930189   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:40.930349   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:40.933758   93254 out.go:179] * Starting "ha-791576-m04" worker node in "ha-791576" cluster
	I1202 19:57:40.937496   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:57:40.940562   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:57:40.944509   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:57:40.944573   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:57:40.944591   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:57:40.944689   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:57:40.944700   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:57:40.944847   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:40.980485   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:57:40.980503   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:57:40.980516   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:57:40.980539   93254 start.go:360] acquireMachinesLock for ha-791576-m04: {Name:mkf6d085e6ffaf9b8d3c89207d22561aa64cc068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:57:40.980591   93254 start.go:364] duration metric: took 37.824µs to acquireMachinesLock for "ha-791576-m04"
	I1202 19:57:40.980609   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:57:40.980616   93254 fix.go:54] fixHost starting: m04
	I1202 19:57:40.980868   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:57:41.009962   93254 fix.go:112] recreateIfNeeded on ha-791576-m04: state=Stopped err=<nil>
	W1202 19:57:41.009990   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:57:41.013529   93254 out.go:252] * Restarting existing docker container for "ha-791576-m04" ...
	I1202 19:57:41.013708   93254 cli_runner.go:164] Run: docker start ha-791576-m04
	I1202 19:57:41.349696   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:57:41.385329   93254 kic.go:430] container "ha-791576-m04" state is running.
	I1202 19:57:41.385673   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:41.416072   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:41.416305   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:57:41.416360   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:41.450379   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:41.450693   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:41.450702   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:57:41.451334   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:57:44.613206   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m04
	
	I1202 19:57:44.613228   93254 ubuntu.go:182] provisioning hostname "ha-791576-m04"
	I1202 19:57:44.613296   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:44.632442   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:44.632744   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:44.632755   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m04 && echo "ha-791576-m04" | sudo tee /etc/hostname
	I1202 19:57:44.799185   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m04
	
	I1202 19:57:44.799313   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:44.822391   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:44.822698   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:44.822720   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:57:44.979513   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:57:44.979597   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:57:44.979629   93254 ubuntu.go:190] setting up certificates
	I1202 19:57:44.979671   93254 provision.go:84] configureAuth start
	I1202 19:57:44.979758   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:45.000651   93254 provision.go:143] copyHostCerts
	I1202 19:57:45.000689   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:57:45.000721   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:57:45.000728   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:57:45.000802   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:57:45.001053   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:57:45.001076   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:57:45.001081   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:57:45.001115   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:57:45.001161   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:57:45.001176   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:57:45.001180   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:57:45.001205   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:57:45.001250   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m04 san=[127.0.0.1 192.168.49.5 ha-791576-m04 localhost minikube]
	I1202 19:57:45.318146   93254 provision.go:177] copyRemoteCerts
	I1202 19:57:45.318219   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:57:45.318283   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.341445   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:45.449731   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:57:45.449820   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:57:45.472182   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:57:45.472243   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:57:45.492286   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:57:45.492350   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:57:45.510812   93254 provision.go:87] duration metric: took 531.109583ms to configureAuth
	I1202 19:57:45.510841   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:57:45.511124   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:45.511270   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.531424   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:45.532066   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:45.532093   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:57:45.884616   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:57:45.884638   93254 machine.go:97] duration metric: took 4.468325015s to provisionDockerMachine
	I1202 19:57:45.884650   93254 start.go:293] postStartSetup for "ha-791576-m04" (driver="docker")
	I1202 19:57:45.884699   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:57:45.884775   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:57:45.884823   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.903688   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.015544   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:57:46.019398   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:57:46.019427   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:57:46.019438   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:57:46.019497   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:57:46.019580   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:57:46.019594   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:57:46.019695   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:57:46.027313   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:46.046534   93254 start.go:296] duration metric: took 161.868987ms for postStartSetup
	I1202 19:57:46.046614   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:57:46.046664   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.064651   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.170656   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:57:46.175466   93254 fix.go:56] duration metric: took 5.194844037s for fixHost
	I1202 19:57:46.175488   93254 start.go:83] releasing machines lock for "ha-791576-m04", held for 5.194888303s
	I1202 19:57:46.175556   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:46.195693   93254 out.go:179] * Found network options:
	I1202 19:57:46.198432   93254 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 19:57:46.201295   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201328   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201354   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201369   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:57:46.201448   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:57:46.201500   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.201866   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:57:46.201941   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.219848   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.241958   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.425303   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:57:46.430326   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:57:46.430443   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:57:46.438789   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:57:46.438867   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:57:46.438915   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:57:46.439004   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:57:46.456655   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:57:46.471141   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:57:46.471238   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:57:46.496759   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:57:46.510741   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:57:46.633508   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:57:46.765301   93254 docker.go:234] disabling docker service ...
	I1202 19:57:46.765415   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:57:46.780559   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:57:46.793987   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:57:46.911887   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:57:47.041997   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:57:47.056582   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:57:47.071233   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:57:47.071325   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.080316   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:57:47.080415   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.090821   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.100556   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.110245   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:57:47.121207   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.131994   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.141137   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.150939   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:57:47.158669   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:57:47.166378   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:47.292693   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
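	(Editorial note, not part of the log: the block above configures CRI-O by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed, then reloading systemd and restarting the crio service. minikube issues these commands over SSH inside the node container; the sketch below runs the two most important edits locally, purely to illustrate the pattern, and is not minikube's code.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a shell command with sudo and surfaces its combined output on failure.
	func run(cmd string) error {
		out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
		}
		return nil
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		steps := []string{
			// pause image and cgroup driver, with the values shown in the log
			fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' %s`, conf),
			fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
			// pick up the new configuration
			"systemctl daemon-reload",
			"systemctl restart crio",
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				fmt.Println(err)
				return
			}
		}
	}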
	I1202 19:57:47.494962   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:57:47.495081   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:57:47.499951   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:57:47.500031   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:57:47.503579   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:57:47.538410   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:57:47.538551   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:47.570927   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:47.607710   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:57:47.610516   93254 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:57:47.613449   93254 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 19:57:47.616291   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:57:47.633448   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:57:47.637365   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:47.649386   93254 mustload.go:66] Loading cluster: ha-791576
	I1202 19:57:47.649615   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:47.649896   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:57:47.667951   93254 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:57:47.668231   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.5
	I1202 19:57:47.668239   93254 certs.go:195] generating shared ca certs ...
	I1202 19:57:47.668253   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:57:47.668379   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:57:47.668418   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:57:47.668429   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:57:47.668440   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:57:47.668450   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:57:47.668462   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:57:47.668518   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:57:47.668548   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:57:47.668557   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:57:47.668584   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:57:47.668607   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:57:47.668629   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:57:47.668673   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:47.668703   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.668715   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.668726   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.668743   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:57:47.691818   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:57:47.709295   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:57:47.728849   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:57:47.751519   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:57:47.769113   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:57:47.789898   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:57:47.811416   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:57:47.817999   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:57:47.826285   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.829982   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.830054   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.872757   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:57:47.880633   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:57:47.889438   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.893309   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.893421   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.934334   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:57:47.942513   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:57:47.950820   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.955232   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.955298   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:57:48.000169   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:57:48.008314   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:57:48.014820   93254 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 19:57:48.014881   93254 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2 crio false true} ...
	I1202 19:57:48.014972   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:57:48.015054   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:57:48.026264   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:57:48.026381   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1202 19:57:48.034605   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:57:48.048065   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:57:48.063803   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:57:48.067995   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:48.077597   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:48.208286   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:48.223948   93254 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1202 19:57:48.224395   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:48.229649   93254 out.go:179] * Verifying Kubernetes components...
	I1202 19:57:48.232645   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:48.363476   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:48.379483   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:57:48.379562   93254 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:57:48.379785   93254 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m04" to be "Ready" ...
	W1202 19:57:50.383622   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	W1202 19:57:52.383990   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	W1202 19:57:54.883829   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	I1202 19:57:55.884383   93254 node_ready.go:49] node "ha-791576-m04" is "Ready"
	I1202 19:57:55.884416   93254 node_ready.go:38] duration metric: took 7.504611892s for node "ha-791576-m04" to be "Ready" ...
	I1202 19:57:55.884429   93254 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:57:55.884499   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:57:55.899211   93254 system_svc.go:56] duration metric: took 14.774003ms WaitForService to wait for kubelet
	I1202 19:57:55.899239   93254 kubeadm.go:587] duration metric: took 7.675249996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:57:55.899279   93254 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:57:55.902757   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902783   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902794   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902800   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902805   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902809   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902813   93254 node_conditions.go:105] duration metric: took 3.530143ms to run NodePressure ...
	I1202 19:57:55.902825   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:57:55.902850   93254 start.go:256] writing updated cluster config ...
	I1202 19:57:55.903157   93254 ssh_runner.go:195] Run: rm -f paused
	I1202 19:57:55.907062   93254 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:57:55.907561   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:57:55.926185   93254 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hw99j" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 19:57:57.936730   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:00.437098   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:02.936225   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:04.937647   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:07.433127   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:09.433300   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:11.439409   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:13.936991   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:16.432700   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:18.432998   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	I1202 19:58:19.936601   93254 pod_ready.go:94] pod "coredns-66bc5c9577-hw99j" is "Ready"
	I1202 19:58:19.936627   93254 pod_ready.go:86] duration metric: took 24.01037278s for pod "coredns-66bc5c9577-hw99j" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.936639   93254 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w2245" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.946385   93254 pod_ready.go:94] pod "coredns-66bc5c9577-w2245" is "Ready"
	I1202 19:58:19.946408   93254 pod_ready.go:86] duration metric: took 9.76284ms for pod "coredns-66bc5c9577-w2245" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.950499   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.967558   93254 pod_ready.go:94] pod "etcd-ha-791576" is "Ready"
	I1202 19:58:19.967580   93254 pod_ready.go:86] duration metric: took 17.043001ms for pod "etcd-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.967589   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.983217   93254 pod_ready.go:94] pod "etcd-ha-791576-m02" is "Ready"
	I1202 19:58:19.983312   93254 pod_ready.go:86] duration metric: took 15.715518ms for pod "etcd-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.983336   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.126953   93254 request.go:683] "Waited before sending request" delay="135.197879ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m03"
	I1202 19:58:20.129983   93254 pod_ready.go:99] pod "etcd-ha-791576-m03" in "kube-system" namespace is gone: node "ha-791576-m03" hosting pod "etcd-ha-791576-m03" is not found/running (skipping!): nodes "ha-791576-m03" not found
	I1202 19:58:20.130062   93254 pod_ready.go:86] duration metric: took 146.705626ms for pod "etcd-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.327487   93254 request.go:683] "Waited before sending request" delay="197.274849ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1202 19:58:20.331946   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.527354   93254 request.go:683] "Waited before sending request" delay="195.301984ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576"
	I1202 19:58:20.726783   93254 request.go:683] "Waited before sending request" delay="195.232619ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:20.729884   93254 pod_ready.go:94] pod "kube-apiserver-ha-791576" is "Ready"
	I1202 19:58:20.729911   93254 pod_ready.go:86] duration metric: took 397.935401ms for pod "kube-apiserver-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.729921   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.927333   93254 request.go:683] "Waited before sending request" delay="197.344927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576-m02"
	I1202 19:58:21.127530   93254 request.go:683] "Waited before sending request" delay="195.226515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m02"
	I1202 19:58:21.134380   93254 pod_ready.go:94] pod "kube-apiserver-ha-791576-m02" is "Ready"
	I1202 19:58:21.134412   93254 pod_ready.go:86] duration metric: took 404.483988ms for pod "kube-apiserver-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.134423   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.326813   93254 request.go:683] "Waited before sending request" delay="192.320431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576-m03"
	I1202 19:58:21.527439   93254 request.go:683] "Waited before sending request" delay="197.329437ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m03"
	I1202 19:58:21.533492   93254 pod_ready.go:99] pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace is gone: node "ha-791576-m03" hosting pod "kube-apiserver-ha-791576-m03" is not found/running (skipping!): nodes "ha-791576-m03" not found
	I1202 19:58:21.533559   93254 pod_ready.go:86] duration metric: took 399.129563ms for pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.727056   93254 request.go:683] "Waited before sending request" delay="193.360691ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1202 19:58:21.730488   93254 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.926811   93254 request.go:683] "Waited before sending request" delay="196.233661ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-791576"
	I1202 19:58:22.127186   93254 request.go:683] "Waited before sending request" delay="194.445087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:22.326846   93254 request.go:683] "Waited before sending request" delay="96.137701ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-791576"
	I1202 19:58:22.527173   93254 request.go:683] "Waited before sending request" delay="197.340316ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:22.927176   93254 request.go:683] "Waited before sending request" delay="193.337028ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:23.326849   93254 request.go:683] "Waited before sending request" delay="93.266332ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	W1202 19:58:23.736689   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:25.737056   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:27.748280   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:30.236783   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:32.236980   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:34.736941   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:37.237158   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	I1202 19:58:38.237174   93254 pod_ready.go:94] pod "kube-controller-manager-ha-791576" is "Ready"
	I1202 19:58:38.237206   93254 pod_ready.go:86] duration metric: took 16.506691586s for pod "kube-controller-manager-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:38.237217   93254 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 19:58:40.244619   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:42.254491   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:44.742876   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:46.743816   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:49.244146   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:51.244844   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:53.742978   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:55.743809   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:58.244614   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:00.270137   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:02.744270   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:04.744321   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:07.244122   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:09.253242   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:11.744525   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:14.244287   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:16.743480   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:18.743527   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:20.744157   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:22.744418   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:25.244307   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:27.244638   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:29.747394   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:32.243699   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:34.244795   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:36.744345   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:39.244487   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:41.743981   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:44.244128   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:46.743606   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:49.243339   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:51.244231   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:53.743102   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:56.242882   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:58.243182   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:00.266823   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:02.745097   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:05.243680   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:07.244023   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:09.743730   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:12.243875   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:14.744016   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:17.243913   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:19.244051   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:21.244857   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:23.743729   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:25.744255   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:27.744400   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:30.244688   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:32.247066   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:34.743523   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:37.244239   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:39.743699   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:41.744670   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:44.244162   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:46.743513   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:49.245392   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:51.744149   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:54.248947   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:56.743993   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:59.244304   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:01.246223   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:03.744505   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:06.243892   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:08.743156   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:10.743380   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:12.744647   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:15.244219   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:17.744350   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:20.243654   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:22.245725   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:24.247107   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:26.743319   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:28.743362   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:30.744276   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:33.243318   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:35.245433   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:37.743505   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:39.745223   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:42.248295   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:44.742894   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:46.744704   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:49.243457   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:51.244130   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:53.745924   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	I1202 20:01:55.907841   93254 pod_ready.go:86] duration metric: took 3m17.670596483s for pod "kube-controller-manager-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:01:55.907902   93254 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1202 20:01:55.907923   93254 pod_ready.go:40] duration metric: took 4m0.000821875s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:01:55.911296   93254 out.go:203] 
	W1202 20:01:55.914260   93254 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1202 20:01:55.917058   93254 out.go:203] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-791576 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
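The failure above comes from the final health wait rather than the restart itself: the log shows node ha-791576-m04 reaching "Ready" and most control-plane pods passing, but kube-controller-manager-ha-791576-m02 never reports "Ready" within the 4m0s extra wait, so minikube exits with GUEST_START (exit status 80). A minimal manual follow-up, assuming the same profile name and test binary path used above (hypothetical commands, not part of the recorded test run), would be:

	# describe the pod that never became Ready
	out/minikube-linux-arm64 -p ha-791576 kubectl -- -n kube-system describe pod kube-controller-manager-ha-791576-m02
	# check the container state on the affected node directly
	out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 -- sudo crictl ps -a --name kube-controller-manager
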
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-791576
helpers_test.go:243: (dbg) docker inspect ha-791576:

-- stdout --
	[
	    {
	        "Id": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	        "Created": "2025-12-02T19:40:54.919017186Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93382,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:55:44.458015606Z",
	            "FinishedAt": "2025-12-02T19:55:43.73005975Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hostname",
	        "HostsPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hosts",
	        "LogPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94-json.log",
	        "Name": "/ha-791576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-791576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-791576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	                "LowerDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-791576",
	                "Source": "/var/lib/docker/volumes/ha-791576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-791576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-791576",
	                "name.minikube.sigs.k8s.io": "ha-791576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "751177d5ee464382bdbbbb72de4fb526573054bfa543b68ed932cd0c1d287957",
	            "SandboxKey": "/var/run/docker/netns/751177d5ee46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-791576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:0b:05:fd:a7:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56dad1208e3b87b69e94173604d284ae0e7c0f0097a9b4d2483c8eb74a9ccc65",
	                    "EndpointID": "f86c1b624622b29b058cdcb9ce2cd5d942bc8d95518744c77b2a01273b6d217e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-791576",
	                        "f426f8269bd9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
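The inspect output above shows the primary node container itself is healthy: ha-791576 is Running (not OOMKilled or restarting), privileged, limited to 3 GiB of memory and 2 CPUs, with the Kubernetes API server port 8443/tcp published on 127.0.0.1:32831. As a sketch, assuming the same container name, the relevant fields can be pulled directly with a Go-template filter instead of reading the full JSON:

	# print status, memory limit, and the published 8443/tcp binding for the ha-791576 container
	docker inspect -f '{{.State.Status}} {{.HostConfig.Memory}} {{index (index .NetworkSettings.Ports "8443/tcp") 0}}' ha-791576
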
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-791576 -n ha-791576
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 logs -n 25
E1202 20:01:57.357028    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 logs -n 25: (1.363950464s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576-m04:/home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp testdata/cp-test.txt ha-791576-m04:/home/docker/cp-test.txt                                                             │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m04.txt │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m04_ha-791576.txt                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576.txt                                                 │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node start m02 --alsologtostderr -v 5                                                                                      │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:46 UTC │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │ 02 Dec 25 19:46 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5                                                                                   │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	│ node    │ ha-791576 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │ 02 Dec 25 19:55 UTC │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │ 02 Dec 25 19:55 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:55:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:55:44.177967   93254 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:55:44.178109   93254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.178122   93254 out.go:374] Setting ErrFile to fd 2...
	I1202 19:55:44.178128   93254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.178419   93254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:55:44.178766   93254 out.go:368] Setting JSON to false
	I1202 19:55:44.179556   93254 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5883,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:55:44.179622   93254 start.go:143] virtualization:  
	I1202 19:55:44.182617   93254 out.go:179] * [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:55:44.186436   93254 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:55:44.186585   93254 notify.go:221] Checking for updates...
	I1202 19:55:44.192062   93254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:55:44.194974   93254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:44.197803   93254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:55:44.200682   93254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:55:44.203721   93254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:55:44.206951   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:44.207525   93254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:55:44.231700   93254 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:55:44.231811   93254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:44.301596   93254 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:55:44.287047316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:44.301733   93254 docker.go:319] overlay module found
	I1202 19:55:44.304924   93254 out.go:179] * Using the docker driver based on existing profile
	I1202 19:55:44.307862   93254 start.go:309] selected driver: docker
	I1202 19:55:44.307884   93254 start.go:927] validating driver "docker" against &{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:44.308026   93254 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:55:44.308131   93254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:44.371573   93254 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:55:44.362799023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:44.372011   93254 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:55:44.372042   93254 cni.go:84] Creating CNI manager for ""
	I1202 19:55:44.372097   93254 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 19:55:44.372154   93254 start.go:353] cluster config:
	{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:44.377185   93254 out.go:179] * Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	I1202 19:55:44.379977   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:55:44.382846   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:55:44.385821   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:44.385879   93254 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 19:55:44.385893   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:55:44.385993   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:55:44.386008   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:55:44.386151   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:44.386369   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:55:44.405321   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:55:44.405352   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:55:44.405373   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:55:44.405404   93254 start.go:360] acquireMachinesLock for ha-791576: {Name:mkf0dd990410be65dae2a66c72db1ebd52e2b0ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:55:44.405469   93254 start.go:364] duration metric: took 41.304µs to acquireMachinesLock for "ha-791576"
	I1202 19:55:44.405492   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:55:44.405502   93254 fix.go:54] fixHost starting: 
	I1202 19:55:44.405802   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.422067   93254 fix.go:112] recreateIfNeeded on ha-791576: state=Stopped err=<nil>
	W1202 19:55:44.422096   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:55:44.425385   93254 out.go:252] * Restarting existing docker container for "ha-791576" ...
	I1202 19:55:44.425482   93254 cli_runner.go:164] Run: docker start ha-791576
	I1202 19:55:44.656773   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.678497   93254 kic.go:430] container "ha-791576" state is running.
	I1202 19:55:44.678860   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:44.708256   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:44.708493   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:44.708552   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:44.731511   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:44.731837   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:44.731849   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:44.733165   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:55:47.885197   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:55:47.885250   93254 ubuntu.go:182] provisioning hostname "ha-791576"
	I1202 19:55:47.885314   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:47.903491   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:47.903813   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:47.903827   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576 && echo "ha-791576" | sudo tee /etc/hostname
	I1202 19:55:48.069176   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:55:48.069254   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.089514   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:48.089877   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:48.089901   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:48.242008   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:48.242032   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:55:48.242057   93254 ubuntu.go:190] setting up certificates
	I1202 19:55:48.242069   93254 provision.go:84] configureAuth start
	I1202 19:55:48.242132   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:48.261821   93254 provision.go:143] copyHostCerts
	I1202 19:55:48.261871   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:48.261931   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:55:48.261951   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:48.262038   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:55:48.262141   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:48.262166   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:55:48.262174   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:48.262211   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:55:48.262289   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:48.262314   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:55:48.262323   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:48.262355   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:55:48.262435   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576 san=[127.0.0.1 192.168.49.2 ha-791576 localhost minikube]
	I1202 19:55:48.452060   93254 provision.go:177] copyRemoteCerts
	I1202 19:55:48.452139   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:48.452177   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.470613   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:48.573192   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:55:48.573250   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:48.589521   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:55:48.589763   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 19:55:48.606218   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:55:48.606297   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 19:55:48.623387   93254 provision.go:87] duration metric: took 381.29482ms to configureAuth
	I1202 19:55:48.623419   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:48.623653   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:48.623765   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.640254   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:48.640566   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:48.640586   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:49.030725   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:49.030745   93254 machine.go:97] duration metric: took 4.32224289s to provisionDockerMachine
	I1202 19:55:49.030757   93254 start.go:293] postStartSetup for "ha-791576" (driver="docker")
	I1202 19:55:49.030768   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:49.030827   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:49.030865   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.051519   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.153353   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:49.156583   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:49.156607   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:49.156618   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:55:49.156674   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:55:49.156758   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:55:49.156764   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:55:49.156861   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:55:49.164042   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:49.180380   93254 start.go:296] duration metric: took 149.593959ms for postStartSetup
	I1202 19:55:49.180465   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:49.180519   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.197329   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.298832   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:49.303554   93254 fix.go:56] duration metric: took 4.898044691s for fixHost
	I1202 19:55:49.303578   93254 start.go:83] releasing machines lock for "ha-791576", held for 4.898097178s
	I1202 19:55:49.303651   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:49.320407   93254 ssh_runner.go:195] Run: cat /version.json
	I1202 19:55:49.320456   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.320470   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:49.320533   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.338342   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.345505   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.524252   93254 ssh_runner.go:195] Run: systemctl --version
	I1202 19:55:49.530647   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:49.565296   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:49.569498   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:49.569577   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:49.577094   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:55:49.577167   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:55:49.577205   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:55:49.577256   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:49.592079   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:49.605549   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:49.605621   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:49.621023   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:49.635753   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:49.750982   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:55:49.859462   93254 docker.go:234] disabling docker service ...
	I1202 19:55:49.859565   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:55:49.874667   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:55:49.887012   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:55:50.007847   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:55:50.134338   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:55:50.146986   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:55:50.161229   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:55:50.161317   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.170383   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:55:50.170453   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.179542   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.188652   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.197399   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:55:50.205856   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.214897   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.223103   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.231783   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:55:50.238878   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:55:50.245749   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:50.382453   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:55:50.564448   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:55:50.564526   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:55:50.568176   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:55:50.568235   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:55:50.571563   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:55:50.595656   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:55:50.595739   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:55:50.625390   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:55:50.655103   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:55:50.658061   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:55:50.674479   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:55:50.678575   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:50.688260   93254 kubeadm.go:884] updating cluster {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:55:50.688998   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:50.689083   93254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:50.726565   93254 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:50.726626   93254 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:55:50.726708   93254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:50.756058   93254 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:50.756081   93254 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:55:50.756091   93254 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:55:50.756189   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:55:50.756269   93254 ssh_runner.go:195] Run: crio config
	I1202 19:55:50.831624   93254 cni.go:84] Creating CNI manager for ""
	I1202 19:55:50.831657   93254 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 19:55:50.831710   93254 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:55:50.831742   93254 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-791576 NodeName:ha-791576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:55:50.831887   93254 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-791576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:55:50.831904   93254 kube-vip.go:115] generating kube-vip config ...
	I1202 19:55:50.831959   93254 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:55:50.843196   93254 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:55:50.843290   93254 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:55:50.843354   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:55:50.850587   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:55:50.850656   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 19:55:50.857765   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 19:55:50.869276   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:55:50.881241   93254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1202 19:55:50.893240   93254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:55:50.905823   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:55:50.909303   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:50.918750   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:51.026144   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:55:51.042322   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.2
	I1202 19:55:51.042383   93254 certs.go:195] generating shared ca certs ...
	I1202 19:55:51.042413   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.042572   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:55:51.042673   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:55:51.042696   93254 certs.go:257] generating profile certs ...
	I1202 19:55:51.042790   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:55:51.042844   93254 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f
	I1202 19:55:51.042883   93254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1202 19:55:51.207706   93254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f ...
	I1202 19:55:51.207774   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f: {Name:mk0befc0b318cce17722eedc60197d074ef72403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.208003   93254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f ...
	I1202 19:55:51.208041   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f: {Name:mk6747dc6a0e6b21e4d9bc0a0b21cc4e1f72108f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.208176   93254 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt
	I1202 19:55:51.208351   93254 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key
	I1202 19:55:51.208521   93254 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:55:51.208562   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:55:51.208598   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:55:51.208631   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:55:51.208669   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:55:51.208699   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:55:51.208731   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:55:51.208772   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:55:51.208803   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:55:51.208876   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:55:51.208937   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:55:51.208962   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:55:51.209012   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:55:51.209063   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:55:51.209110   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:55:51.209189   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:51.209271   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.209343   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.209384   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.210038   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:55:51.231782   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:55:51.250385   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:55:51.267781   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:55:51.286345   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 19:55:51.304523   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:55:51.322173   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:55:51.340727   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:55:51.358222   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:55:51.376555   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:55:51.392531   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:55:51.409238   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:55:51.421079   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:55:51.427316   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:55:51.435537   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.438995   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.439062   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.479993   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:55:51.487626   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:55:51.495524   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.499309   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.499393   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.539899   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:55:51.548401   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:55:51.556378   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.559859   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.559918   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.600611   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:55:51.608321   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:55:51.611874   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:55:51.656450   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:55:51.699650   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:55:51.748675   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:55:51.798307   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:55:51.891003   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
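	The "-checkend 86400" runs above assert that each control-plane certificate is still valid 24 hours from now. The same check can be done natively with crypto/x509; this is only an illustrative alternative, since the test (as logged) shells out to openssl:

	// Sketch only: report whether a certificate remains valid for the given duration.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func validFor(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}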
	I1202 19:55:51.960070   93254 kubeadm.go:401] StartCluster: {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:51.960253   93254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:55:51.960360   93254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:55:52.020134   93254 cri.go:89] found id: "7193dbe9e138217968055549ef0c321456d1ba0d688ed39c88faecd90d288068"
	I1202 19:55:52.020208   93254 cri.go:89] found id: "53ec2f9388ecacb74421a2e8c3b5d943afd06e705e756948fa12bc41dd8a37f9"
	I1202 19:55:52.020237   93254 cri.go:89] found id: "9e7e710fc30aaba995500f37ffa3972d03427ad4b5096ea5e3f635761be6fe1e"
	I1202 19:55:52.020256   93254 cri.go:89] found id: "b0964e2af680e31e59bc41f16955d47d76026029392b1597b247a7226618e258"
	I1202 19:55:52.020292   93254 cri.go:89] found id: "935b971802eea43815b6a2ba78749d6f6a65dfeb75a70453def4a7ff8c6e8f29"
	I1202 19:55:52.020316   93254 cri.go:89] found id: ""
	I1202 19:55:52.020420   93254 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 19:55:52.039471   93254 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 19:55:52.039648   93254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:55:52.052041   93254 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:55:52.052113   93254 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:55:52.052202   93254 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:55:52.067291   93254 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:55:52.067793   93254 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-791576" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:52.067946   93254 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "ha-791576" cluster setting kubeconfig missing "ha-791576" context setting]
	I1202 19:55:52.068355   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.069044   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:55:52.069935   93254 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:55:52.070037   93254 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:55:52.070083   93254 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:55:52.070105   93254 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:55:52.070125   93254 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:55:52.070010   93254 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
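	The rest.Config dump above corresponds to a client-go client built from the profile's client certificate, key, and CA. A minimal sketch of the equivalent construction (host and paths taken from the log; error handling kept short, and the node listing is only a usage example):

	// Sketch only: build a Kubernetes clientset from the cert paths shown above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key",
				CAFile:   "/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}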
	I1202 19:55:52.070578   93254 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:55:52.089251   93254 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:55:52.089343   93254 kubeadm.go:602] duration metric: took 37.210796ms to restartPrimaryControlPlane
	I1202 19:55:52.089369   93254 kubeadm.go:403] duration metric: took 129.308895ms to StartCluster
	I1202 19:55:52.089422   93254 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.089527   93254 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:52.090263   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.090544   93254 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:55:52.090598   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:55:52.090630   93254 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:55:52.091558   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:52.096453   93254 out.go:179] * Enabled addons: 
	I1202 19:55:52.099512   93254 addons.go:530] duration metric: took 8.877075ms for enable addons: enabled=[]
	I1202 19:55:52.099607   93254 start.go:247] waiting for cluster config update ...
	I1202 19:55:52.099630   93254 start.go:256] writing updated cluster config ...
	I1202 19:55:52.102945   93254 out.go:203] 
	I1202 19:55:52.106144   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:52.106258   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.109518   93254 out.go:179] * Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	I1202 19:55:52.112289   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:55:52.115487   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:55:52.118244   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:52.118264   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:55:52.118378   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:55:52.118387   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:55:52.118504   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.118707   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:55:52.150292   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:55:52.150314   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:55:52.150328   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:55:52.150350   93254 start.go:360] acquireMachinesLock for ha-791576-m02: {Name:mka8905b718126ef1af03281337b5ea61f190248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:55:52.150401   93254 start.go:364] duration metric: took 35.93µs to acquireMachinesLock for "ha-791576-m02"
	I1202 19:55:52.150419   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:55:52.150424   93254 fix.go:54] fixHost starting: m02
	I1202 19:55:52.150685   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:52.190695   93254 fix.go:112] recreateIfNeeded on ha-791576-m02: state=Stopped err=<nil>
	W1202 19:55:52.190719   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:55:52.194176   93254 out.go:252] * Restarting existing docker container for "ha-791576-m02" ...
	I1202 19:55:52.194252   93254 cli_runner.go:164] Run: docker start ha-791576-m02
	I1202 19:55:52.599976   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:52.629412   93254 kic.go:430] container "ha-791576-m02" state is running.
	I1202 19:55:52.629885   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:52.664048   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.664285   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:52.664350   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:52.688321   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:52.688636   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:52.688648   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:52.689286   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:55:55.971095   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:55:55.971155   93254 ubuntu.go:182] provisioning hostname "ha-791576-m02"
	I1202 19:55:55.971238   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:55.998825   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:55.999132   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:55.999149   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m02 && echo "ha-791576-m02" | sudo tee /etc/hostname
	I1202 19:55:56.285260   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:55:56.285380   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:56.324784   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:56.325097   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:56.325112   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:56.574478   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:56.574546   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:55:56.574578   93254 ubuntu.go:190] setting up certificates
	I1202 19:55:56.574609   93254 provision.go:84] configureAuth start
	I1202 19:55:56.574702   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:56.605527   93254 provision.go:143] copyHostCerts
	I1202 19:55:56.605564   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:56.605607   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:55:56.605617   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:56.605764   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:55:56.605858   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:56.605875   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:55:56.605880   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:56.605907   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:55:56.605945   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:56.605961   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:55:56.605965   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:56.605988   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:55:56.606032   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m02 san=[127.0.0.1 192.168.49.3 ha-791576-m02 localhost minikube]
	I1202 19:55:57.020409   93254 provision.go:177] copyRemoteCerts
	I1202 19:55:57.020550   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:57.020628   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:57.038510   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:57.153644   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:55:57.153716   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:55:57.184300   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:55:57.184359   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:55:57.266970   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:55:57.267064   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:57.331675   93254 provision.go:87] duration metric: took 757.029391ms to configureAuth
	I1202 19:55:57.331740   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:57.331983   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:57.332101   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:57.363340   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:57.363649   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:57.363662   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:58.504594   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:58.504673   93254 machine.go:97] duration metric: took 5.840377716s to provisionDockerMachine
	I1202 19:55:58.504698   93254 start.go:293] postStartSetup for "ha-791576-m02" (driver="docker")
	I1202 19:55:58.504722   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:58.504818   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:58.504881   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.552759   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.683948   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:58.687504   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:58.687528   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:58.687538   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:55:58.687590   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:55:58.687661   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:55:58.687667   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:55:58.687766   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:55:58.696105   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:58.729078   93254 start.go:296] duration metric: took 224.353376ms for postStartSetup
	I1202 19:55:58.729200   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:58.729258   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.748281   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.865403   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:58.871596   93254 fix.go:56] duration metric: took 6.721165168s for fixHost
	I1202 19:55:58.871617   93254 start.go:83] releasing machines lock for "ha-791576-m02", held for 6.7212084s
	I1202 19:55:58.871682   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:58.902526   93254 out.go:179] * Found network options:
	I1202 19:55:58.905433   93254 out.go:179]   - NO_PROXY=192.168.49.2
	W1202 19:55:58.908359   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:55:58.908394   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:55:58.908458   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:58.908500   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.908758   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:58.908808   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.941876   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.957861   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:59.379469   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:59.393428   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:59.393549   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:59.436981   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:55:59.437054   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:55:59.437109   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:55:59.437185   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:59.476789   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:59.492965   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:59.493030   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:59.510203   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:59.535902   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:59.890794   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:56:00.391688   93254 docker.go:234] disabling docker service ...
	I1202 19:56:00.391868   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:56:00.454884   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:56:00.506073   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:56:00.797340   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:56:01.166082   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:56:01.219009   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:56:01.256352   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:56:01.256455   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.307607   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:56:01.307708   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.346124   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.369272   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.393260   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:56:01.408865   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.438945   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.451063   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.488074   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:56:01.499136   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
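	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs with conmon in the pod cgroup, and allow unprivileged low ports. A simplified Go sketch of the same edits on a local copy of the file (illustrative path; it omits the de-duplication of existing conmon_cgroup and default_sysctls lines that the commands above handle):

	// Sketch only: apply the pause_image / cgroup_manager rewrites shown above.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		s := string(data)
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
			panic(err)
		}
	}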
	I1202 19:56:01.507846   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:56:01.747608   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:57:32.000346   93254 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.252704452s)
	I1202 19:57:32.000372   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:57:32.000423   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:57:32.004239   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:57:32.004296   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:57:32.007869   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:57:32.036443   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:57:32.036523   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:32.065233   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:32.100050   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:57:32.103063   93254 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:57:32.106043   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:57:32.121822   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:57:32.126366   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:32.138121   93254 mustload.go:66] Loading cluster: ha-791576
	I1202 19:57:32.138366   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:32.138687   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:57:32.155548   93254 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:57:32.155827   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.3
	I1202 19:57:32.155834   93254 certs.go:195] generating shared ca certs ...
	I1202 19:57:32.155849   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:57:32.155961   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:57:32.156000   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:57:32.156007   93254 certs.go:257] generating profile certs ...
	I1202 19:57:32.156076   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:57:32.156141   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.8b416d14
	I1202 19:57:32.156181   93254 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:57:32.156189   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:57:32.156201   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:57:32.156212   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:57:32.156222   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:57:32.156232   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:57:32.156243   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:57:32.156253   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:57:32.156264   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:57:32.156310   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:57:32.156339   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:57:32.156347   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:57:32.156372   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:57:32.156396   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:57:32.156422   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:57:32.156466   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:32.156496   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.156509   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.156520   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.156574   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:57:32.173330   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:57:32.269964   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:57:32.273629   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:57:32.281594   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:57:32.284955   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:57:32.292668   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:57:32.296257   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:57:32.304405   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:57:32.307845   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:57:32.316416   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:57:32.319715   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:57:32.331425   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:57:32.335418   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:57:32.345158   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:57:32.362660   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:57:32.381060   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:57:32.399011   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:57:32.417547   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 19:57:32.436697   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:57:32.454716   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:57:32.472049   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:57:32.488952   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:57:32.507493   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:57:32.525119   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:57:32.543594   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:57:32.556208   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:57:32.568883   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:57:32.582212   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:57:32.594098   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:57:32.606261   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:57:32.618196   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:57:32.631378   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:57:32.637197   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:57:32.645952   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.649933   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.650038   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.692551   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:57:32.700398   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:57:32.708435   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.711984   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.712047   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.752921   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:57:32.760626   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:57:32.768641   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.772345   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.772443   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.817730   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:57:32.825349   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:57:32.829063   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:57:32.869702   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:57:32.910289   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:57:32.951408   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:57:32.991818   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:57:33.032586   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:57:33.073299   93254 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1202 19:57:33.073392   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:57:33.073421   93254 kube-vip.go:115] generating kube-vip config ...
	I1202 19:57:33.073489   93254 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:57:33.084964   93254 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
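	The "lsmod | grep ip_vs" probe above decides whether kube-vip may use IPVS-based control-plane load-balancing; here it fails, so the generated manifest falls back to ARP mode only. An equivalent check that reads /proc/modules directly (illustrative; the tooling shells out as shown rather than doing this):

	// Sketch only: report whether any ip_vs kernel module is loaded.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func ipvsLoaded() (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), "ip_vs") {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := ipvsLoaded()
		fmt.Println("ip_vs loaded:", ok, err)
	}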
	I1202 19:57:33.085019   93254 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:57:33.085079   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:57:33.092389   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:57:33.092504   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:57:33.099839   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:57:33.111954   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:57:33.124537   93254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:57:33.139421   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:57:33.144249   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:33.154311   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:33.286984   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:33.300875   93254 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:57:33.301346   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:33.304919   93254 out.go:179] * Verifying Kubernetes components...
	I1202 19:57:33.307970   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:33.441136   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:33.455239   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:57:33.455306   93254 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:57:33.455557   93254 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:57:37.330869   93254 node_ready.go:49] node "ha-791576-m02" is "Ready"
	I1202 19:57:37.330905   93254 node_ready.go:38] duration metric: took 3.875318836s for node "ha-791576-m02" to be "Ready" ...
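	The node_ready.go wait above polls the API until "ha-791576-m02" reports the Ready condition. A roughly equivalent client-go polling helper, written as a library-style sketch (the clientset would be built as in the earlier rest.Config example; the intervals here are illustrative):

	// Sketch only: wait until a node reports Ready, polling every 2s for up to 6m.
	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling across transient API errors
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}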
	I1202 19:57:37.330920   93254 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:57:37.330980   93254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:57:37.350335   93254 api_server.go:72] duration metric: took 4.049370544s to wait for apiserver process to appear ...
	I1202 19:57:37.350361   93254 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:57:37.350381   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:37.437921   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:37.437997   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:37.850509   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:37.877801   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:37.877836   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:38.351486   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:38.375050   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:38.375085   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:38.850665   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:38.878543   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:38.878572   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:39.351038   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:39.378413   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:39.378441   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:39.850846   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:39.864441   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:39.864468   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:40.350812   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:40.361521   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:40.361559   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:40.850824   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:40.864753   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:57:40.866306   93254 api_server.go:141] control plane version: v1.34.2
	I1202 19:57:40.866336   93254 api_server.go:131] duration metric: took 3.51596701s to wait for apiserver health ...
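The wait loop above polls the apiserver's /healthz endpoint roughly every 500 ms; the only sub-check that keeps failing is poststarthook/rbac/bootstrap-roles, which stays failed until the bootstrap RBAC roles have been created, after which the endpoint returns 200 and the wait finishes in about 3.5 s. The same poll can be reproduced by hand roughly as below; the address https://192.168.49.2:8443 is taken from the log, while the use of curl -k, the ?verbose query, and the 0.5 s interval are illustrative assumptions rather than minikube's actual implementation (which does this in Go via api_server.go).

    # Hedged sketch: poll the apiserver health endpoint the way the loop above does.
    # Assumes anonymous access to /healthz (allowed by default RBAC) and skips TLS verification.
    APISERVER=https://192.168.49.2:8443
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$APISERVER/healthz")" = "200" ]; do
      # ?verbose lists every sub-check; failing ones are prefixed with [-], as in the log.
      curl -sk "$APISERVER/healthz?verbose" | grep '^\[-\]' || true
      sleep 0.5
    done
    echo "apiserver healthy"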
	I1202 19:57:40.866371   93254 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:57:40.881984   93254 system_pods.go:59] 26 kube-system pods found
	I1202 19:57:40.882074   93254 system_pods.go:61] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.882090   93254 system_pods.go:61] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.882098   93254 system_pods.go:61] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:57:40.882107   93254 system_pods.go:61] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:57:40.882112   93254 system_pods.go:61] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:57:40.882116   93254 system_pods.go:61] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:57:40.882146   93254 system_pods.go:61] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:57:40.882164   93254 system_pods.go:61] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:57:40.882169   93254 system_pods.go:61] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:57:40.882175   93254 system_pods.go:61] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:57:40.882183   93254 system_pods.go:61] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running
	I1202 19:57:40.882192   93254 system_pods.go:61] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:57:40.882207   93254 system_pods.go:61] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:57:40.882228   93254 system_pods.go:61] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running
	I1202 19:57:40.882258   93254 system_pods.go:61] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:57:40.882267   93254 system_pods.go:61] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:57:40.882271   93254 system_pods.go:61] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:57:40.882280   93254 system_pods.go:61] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:57:40.882288   93254 system_pods.go:61] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:57:40.882291   93254 system_pods.go:61] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:57:40.882295   93254 system_pods.go:61] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:57:40.882298   93254 system_pods.go:61] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:57:40.882302   93254 system_pods.go:61] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:57:40.882306   93254 system_pods.go:61] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:57:40.882325   93254 system_pods.go:61] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:57:40.882337   93254 system_pods.go:61] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:57:40.882356   93254 system_pods.go:74] duration metric: took 15.961542ms to wait for pod list to return data ...
	I1202 19:57:40.882368   93254 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:57:40.886711   93254 default_sa.go:45] found service account: "default"
	I1202 19:57:40.886765   93254 default_sa.go:55] duration metric: took 4.377498ms for default service account to be created ...
	I1202 19:57:40.886816   93254 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:57:40.896351   93254 system_pods.go:86] 26 kube-system pods found
	I1202 19:57:40.896402   93254 system_pods.go:89] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.896455   93254 system_pods.go:89] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.896471   93254 system_pods.go:89] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:57:40.896477   93254 system_pods.go:89] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:57:40.896488   93254 system_pods.go:89] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:57:40.896493   93254 system_pods.go:89] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:57:40.896517   93254 system_pods.go:89] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:57:40.896529   93254 system_pods.go:89] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:57:40.896547   93254 system_pods.go:89] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:57:40.896561   93254 system_pods.go:89] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:57:40.896567   93254 system_pods.go:89] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running
	I1202 19:57:40.896577   93254 system_pods.go:89] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:57:40.896584   93254 system_pods.go:89] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:57:40.896589   93254 system_pods.go:89] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running
	I1202 19:57:40.896594   93254 system_pods.go:89] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:57:40.896605   93254 system_pods.go:89] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:57:40.896635   93254 system_pods.go:89] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:57:40.896647   93254 system_pods.go:89] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:57:40.896651   93254 system_pods.go:89] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:57:40.896655   93254 system_pods.go:89] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:57:40.896660   93254 system_pods.go:89] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:57:40.896669   93254 system_pods.go:89] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:57:40.896714   93254 system_pods.go:89] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:57:40.896731   93254 system_pods.go:89] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:57:40.896736   93254 system_pods.go:89] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:57:40.896740   93254 system_pods.go:89] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:57:40.896767   93254 system_pods.go:126] duration metric: took 9.944455ms to wait for k8s-apps to be running ...
	I1202 19:57:40.896779   93254 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:57:40.896851   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:57:40.912940   93254 system_svc.go:56] duration metric: took 16.146284ms WaitForService to wait for kubelet
	I1202 19:57:40.912971   93254 kubeadm.go:587] duration metric: took 7.612010896s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:57:40.913011   93254 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:57:40.922663   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922709   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922747   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922761   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922765   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922770   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922782   93254 node_conditions.go:105] duration metric: took 9.75895ms to run NodePressure ...
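With the apiserver healthy, the restart path then verifies in sequence that all 26 kube-system pods are listed, that the default service account exists, that the kubelet service is active on the node, and that no node reports pressure conditions (the nodes checked each report 203034800Ki of ephemeral storage and 2 CPUs); the combined wait takes about 7.6 s. These checks can be approximated from the client side as follows; the profile/context name ha-791576 comes from the log, and the kubectl and minikube invocations are stand-ins for what the Go code does through client-go and SSH, not the literal implementation.

    # Hedged sketch of the readiness checks logged above (names taken from the log).
    kubectl --context ha-791576 get pods -n kube-system          # 26 pods in this run
    kubectl --context ha-791576 get serviceaccount default -n default
    kubectl --context ha-791576 get nodes \
      -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
    # kubelet liveness is checked over SSH with systemctl, as in the log:
    minikube -p ha-791576 ssh -- sudo systemctl is-active --quiet service kubelet && echo kubelet active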
	I1202 19:57:40.922797   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:57:40.922840   93254 start.go:256] writing updated cluster config ...
	I1202 19:57:40.926963   93254 out.go:203] 
	I1202 19:57:40.930189   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:40.930349   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:40.933758   93254 out.go:179] * Starting "ha-791576-m04" worker node in "ha-791576" cluster
	I1202 19:57:40.937496   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:57:40.940562   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:57:40.944509   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:57:40.944573   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:57:40.944591   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:57:40.944689   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:57:40.944700   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:57:40.944847   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:40.980485   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:57:40.980503   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:57:40.980516   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:57:40.980539   93254 start.go:360] acquireMachinesLock for ha-791576-m04: {Name:mkf6d085e6ffaf9b8d3c89207d22561aa64cc068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:57:40.980591   93254 start.go:364] duration metric: took 37.824µs to acquireMachinesLock for "ha-791576-m04"
	I1202 19:57:40.980609   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:57:40.980616   93254 fix.go:54] fixHost starting: m04
	I1202 19:57:40.980868   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:57:41.009962   93254 fix.go:112] recreateIfNeeded on ha-791576-m04: state=Stopped err=<nil>
	W1202 19:57:41.009990   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:57:41.013529   93254 out.go:252] * Restarting existing docker container for "ha-791576-m04" ...
	I1202 19:57:41.013708   93254 cli_runner.go:164] Run: docker start ha-791576-m04
	I1202 19:57:41.349696   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:57:41.385329   93254 kic.go:430] container "ha-791576-m04" state is running.
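The m04 worker was found Stopped, so the existing container is started again rather than recreated, and its state is read back with docker container inspect; the SSH dial that follows uses the host port Docker publishes for the container's port 22 (32838 in this run), and the first handshake fails with EOF simply because sshd is not yet up inside the freshly started container. The equivalent manual steps, using the container name and inspect templates shown in the log:

    # Hedged sketch: restart the stopped worker container and read back its state,
    # IP and forwarded SSH port (commands mirror the cli_runner invocations in the log).
    docker start ha-791576-m04
    docker container inspect ha-791576-m04 --format '{{.State.Status}}'
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-791576-m04
    docker container inspect ha-791576-m04 -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'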
	I1202 19:57:41.385673   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:41.416072   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:41.416305   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:57:41.416360   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:41.450379   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:41.450693   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:41.450702   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:57:41.451334   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:57:44.613206   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m04
	
	I1202 19:57:44.613228   93254 ubuntu.go:182] provisioning hostname "ha-791576-m04"
	I1202 19:57:44.613296   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:44.632442   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:44.632744   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:44.632755   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m04 && echo "ha-791576-m04" | sudo tee /etc/hostname
	I1202 19:57:44.799185   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m04
	
	I1202 19:57:44.799313   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:44.822391   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:44.822698   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:44.822720   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:57:44.979513   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:57:44.979597   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:57:44.979629   93254 ubuntu.go:190] setting up certificates
	I1202 19:57:44.979671   93254 provision.go:84] configureAuth start
	I1202 19:57:44.979758   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:45.000651   93254 provision.go:143] copyHostCerts
	I1202 19:57:45.000689   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:57:45.000721   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:57:45.000728   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:57:45.000802   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:57:45.001053   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:57:45.001076   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:57:45.001081   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:57:45.001115   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:57:45.001161   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:57:45.001176   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:57:45.001180   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:57:45.001205   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:57:45.001250   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m04 san=[127.0.0.1 192.168.49.5 ha-791576-m04 localhost minikube]
	I1202 19:57:45.318146   93254 provision.go:177] copyRemoteCerts
	I1202 19:57:45.318219   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:57:45.318283   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.341445   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:45.449731   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:57:45.449820   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:57:45.472182   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:57:45.472243   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:57:45.492286   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:57:45.492350   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:57:45.510812   93254 provision.go:87] duration metric: took 531.109583ms to configureAuth
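provision.go then regenerates a server certificate for the machine with SANs covering 127.0.0.1, the node IP 192.168.49.5, the hostname ha-791576-m04 and localhost, and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node over SSH; configureAuth completes in roughly 531 ms. One way to confirm the copied certificate matches what the log describes (paths and node name from the log; the openssl invocation assumes openssl is available in the node image):

    # Hedged sketch: inspect the freshly copied server certificate on the worker node.
    minikube -p ha-791576 ssh -n ha-791576-m04 -- \
      sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName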
	I1202 19:57:45.510841   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:57:45.511124   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:45.511270   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.531424   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:45.532066   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:45.532093   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:57:45.884616   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:57:45.884638   93254 machine.go:97] duration metric: took 4.468325015s to provisionDockerMachine
	I1202 19:57:45.884650   93254 start.go:293] postStartSetup for "ha-791576-m04" (driver="docker")
	I1202 19:57:45.884699   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:57:45.884775   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:57:45.884823   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.903688   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.015544   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:57:46.019398   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:57:46.019427   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:57:46.019438   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:57:46.019497   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:57:46.019580   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:57:46.019594   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:57:46.019695   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:57:46.027313   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:46.046534   93254 start.go:296] duration metric: took 161.868987ms for postStartSetup
	I1202 19:57:46.046614   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:57:46.046664   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.064651   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.170656   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:57:46.175466   93254 fix.go:56] duration metric: took 5.194844037s for fixHost
	I1202 19:57:46.175488   93254 start.go:83] releasing machines lock for "ha-791576-m04", held for 5.194888303s
	I1202 19:57:46.175556   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:46.195693   93254 out.go:179] * Found network options:
	I1202 19:57:46.198432   93254 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 19:57:46.201295   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201328   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201354   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201369   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:57:46.201448   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:57:46.201500   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.201866   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:57:46.201941   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.219848   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.241958   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.425303   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:57:46.430326   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:57:46.430443   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:57:46.438789   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:57:46.438867   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:57:46.438915   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:57:46.439004   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:57:46.456655   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:57:46.471141   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:57:46.471238   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:57:46.496759   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:57:46.510741   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:57:46.633508   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:57:46.765301   93254 docker.go:234] disabling docker service ...
	I1202 19:57:46.765415   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:57:46.780559   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:57:46.793987   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:57:46.911887   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:57:47.041997   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:57:47.056582   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:57:47.071233   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:57:47.071325   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.080316   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:57:47.080415   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.090821   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.100556   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.110245   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:57:47.121207   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.131994   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.141137   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.150939   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:57:47.158669   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:57:47.166378   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:47.292693   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:57:47.494962   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:57:47.495081   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:57:47.499951   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:57:47.500031   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:57:47.503579   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:57:47.538410   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:57:47.538551   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:47.570927   93254 ssh_runner.go:195] Run: crio --version
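On the restarted worker, containerd and the docker/cri-docker services are stopped and masked, and CRI-O is reconfigured through a series of sed edits to /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls; crio is then restarted and crictl reports RuntimeVersion 1.34.2. The resulting configuration can be spot-checked like this (file path and expected values from the log; the grep itself is illustrative):

    # Hedged sketch: verify the CRI-O settings the sed edits above are meant to produce.
    minikube -p ha-791576 ssh -n ha-791576-m04 -- \
      sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    minikube -p ha-791576 ssh -n ha-791576-m04 -- sudo crictl version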
	I1202 19:57:47.607710   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:57:47.610516   93254 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:57:47.613449   93254 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 19:57:47.616291   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:57:47.633448   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:57:47.637365   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:47.649386   93254 mustload.go:66] Loading cluster: ha-791576
	I1202 19:57:47.649615   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:47.649896   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:57:47.667951   93254 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:57:47.668231   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.5
	I1202 19:57:47.668239   93254 certs.go:195] generating shared ca certs ...
	I1202 19:57:47.668253   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:57:47.668379   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:57:47.668418   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:57:47.668429   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:57:47.668440   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:57:47.668450   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:57:47.668462   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:57:47.668518   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:57:47.668548   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:57:47.668557   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:57:47.668584   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:57:47.668607   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:57:47.668629   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:57:47.668673   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:47.668703   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.668715   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.668726   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.668743   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:57:47.691818   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:57:47.709295   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:57:47.728849   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:57:47.751519   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:57:47.769113   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:57:47.789898   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:57:47.811416   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:57:47.817999   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:57:47.826285   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.829982   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.830054   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.872757   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:57:47.880633   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:57:47.889438   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.893309   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.893421   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.934334   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:57:47.942513   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:57:47.950820   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.955232   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.955298   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:57:48.000169   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
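Each CA copied above is also symlinked as /etc/ssl/certs/<subject-hash>.0, the name OpenSSL uses to look up a trusted certificate by hash; the hash values come from the openssl x509 -hash invocations in the log. A short Go sketch of that convention, assuming openssl is on PATH (the linkCA helper is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA creates /etc/ssl/certs/<subject-hash>.0 pointing at the given PEM,
// mirroring the "openssl x509 -hash ... && ln -fs ..." pair run above.
func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent to: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}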
	I1202 19:57:48.008314   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:57:48.014820   93254 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 19:57:48.014881   93254 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2 crio false true} ...
	I1202 19:57:48.014972   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:57:48.015054   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:57:48.026264   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:57:48.026381   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1202 19:57:48.034605   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:57:48.048065   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:57:48.063803   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:57:48.067995   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:48.077597   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:48.208286   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:48.223948   93254 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1202 19:57:48.224395   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:48.229649   93254 out.go:179] * Verifying Kubernetes components...
	I1202 19:57:48.232645   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:48.363476   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:48.379483   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:57:48.379562   93254 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:57:48.379785   93254 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m04" to be "Ready" ...
	W1202 19:57:50.383622   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	W1202 19:57:52.383990   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	W1202 19:57:54.883829   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	I1202 19:57:55.884383   93254 node_ready.go:49] node "ha-791576-m04" is "Ready"
	I1202 19:57:55.884416   93254 node_ready.go:38] duration metric: took 7.504611892s for node "ha-791576-m04" to be "Ready" ...
	I1202 19:57:55.884429   93254 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:57:55.884499   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:57:55.899211   93254 system_svc.go:56] duration metric: took 14.774003ms WaitForService to wait for kubelet
	I1202 19:57:55.899239   93254 kubeadm.go:587] duration metric: took 7.675249996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:57:55.899279   93254 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:57:55.902757   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902783   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902794   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902800   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902805   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902809   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902813   93254 node_conditions.go:105] duration metric: took 3.530143ms to run NodePressure ...
	I1202 19:57:55.902825   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:57:55.902850   93254 start.go:256] writing updated cluster config ...
	I1202 19:57:55.903157   93254 ssh_runner.go:195] Run: rm -f paused
	I1202 19:57:55.907062   93254 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:57:55.907561   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:57:55.926185   93254 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hw99j" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 19:57:57.936730   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:00.437098   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:02.936225   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:04.937647   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:07.433127   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:09.433300   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:11.439409   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:13.936991   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:16.432700   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:18.432998   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	I1202 19:58:19.936601   93254 pod_ready.go:94] pod "coredns-66bc5c9577-hw99j" is "Ready"
	I1202 19:58:19.936627   93254 pod_ready.go:86] duration metric: took 24.01037278s for pod "coredns-66bc5c9577-hw99j" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.936639   93254 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w2245" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.946385   93254 pod_ready.go:94] pod "coredns-66bc5c9577-w2245" is "Ready"
	I1202 19:58:19.946408   93254 pod_ready.go:86] duration metric: took 9.76284ms for pod "coredns-66bc5c9577-w2245" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.950499   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.967558   93254 pod_ready.go:94] pod "etcd-ha-791576" is "Ready"
	I1202 19:58:19.967580   93254 pod_ready.go:86] duration metric: took 17.043001ms for pod "etcd-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.967589   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.983217   93254 pod_ready.go:94] pod "etcd-ha-791576-m02" is "Ready"
	I1202 19:58:19.983312   93254 pod_ready.go:86] duration metric: took 15.715518ms for pod "etcd-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.983336   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.126953   93254 request.go:683] "Waited before sending request" delay="135.197879ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m03"
	I1202 19:58:20.129983   93254 pod_ready.go:99] pod "etcd-ha-791576-m03" in "kube-system" namespace is gone: node "ha-791576-m03" hosting pod "etcd-ha-791576-m03" is not found/running (skipping!): nodes "ha-791576-m03" not found
	I1202 19:58:20.130062   93254 pod_ready.go:86] duration metric: took 146.705626ms for pod "etcd-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.327487   93254 request.go:683] "Waited before sending request" delay="197.274849ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1202 19:58:20.331946   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.527354   93254 request.go:683] "Waited before sending request" delay="195.301984ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576"
	I1202 19:58:20.726783   93254 request.go:683] "Waited before sending request" delay="195.232619ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:20.729884   93254 pod_ready.go:94] pod "kube-apiserver-ha-791576" is "Ready"
	I1202 19:58:20.729911   93254 pod_ready.go:86] duration metric: took 397.935401ms for pod "kube-apiserver-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.729921   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.927333   93254 request.go:683] "Waited before sending request" delay="197.344927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576-m02"
	I1202 19:58:21.127530   93254 request.go:683] "Waited before sending request" delay="195.226515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m02"
	I1202 19:58:21.134380   93254 pod_ready.go:94] pod "kube-apiserver-ha-791576-m02" is "Ready"
	I1202 19:58:21.134412   93254 pod_ready.go:86] duration metric: took 404.483988ms for pod "kube-apiserver-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.134423   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.326813   93254 request.go:683] "Waited before sending request" delay="192.320431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576-m03"
	I1202 19:58:21.527439   93254 request.go:683] "Waited before sending request" delay="197.329437ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m03"
	I1202 19:58:21.533492   93254 pod_ready.go:99] pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace is gone: node "ha-791576-m03" hosting pod "kube-apiserver-ha-791576-m03" is not found/running (skipping!): nodes "ha-791576-m03" not found
	I1202 19:58:21.533559   93254 pod_ready.go:86] duration metric: took 399.129563ms for pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.727056   93254 request.go:683] "Waited before sending request" delay="193.360691ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1202 19:58:21.730488   93254 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.926811   93254 request.go:683] "Waited before sending request" delay="196.233661ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-791576"
	I1202 19:58:22.127186   93254 request.go:683] "Waited before sending request" delay="194.445087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:22.326846   93254 request.go:683] "Waited before sending request" delay="96.137701ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-791576"
	I1202 19:58:22.527173   93254 request.go:683] "Waited before sending request" delay="197.340316ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:22.927176   93254 request.go:683] "Waited before sending request" delay="193.337028ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:23.326849   93254 request.go:683] "Waited before sending request" delay="93.266332ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	W1202 19:58:23.736689   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:25.737056   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:27.748280   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:30.236783   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:32.236980   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:34.736941   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:37.237158   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	I1202 19:58:38.237174   93254 pod_ready.go:94] pod "kube-controller-manager-ha-791576" is "Ready"
	I1202 19:58:38.237206   93254 pod_ready.go:86] duration metric: took 16.506691586s for pod "kube-controller-manager-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:38.237217   93254 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 19:58:40.244619   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:42.254491   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:44.742876   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:46.743816   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:49.244146   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:51.244844   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:53.742978   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:55.743809   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:58.244614   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:00.270137   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:02.744270   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:04.744321   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:07.244122   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:09.253242   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:11.744525   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:14.244287   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:16.743480   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:18.743527   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:20.744157   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:22.744418   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:25.244307   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:27.244638   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:29.747394   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:32.243699   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:34.244795   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:36.744345   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:39.244487   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:41.743981   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:44.244128   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:46.743606   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:49.243339   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:51.244231   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:53.743102   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:56.242882   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:58.243182   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:00.266823   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:02.745097   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:05.243680   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:07.244023   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:09.743730   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:12.243875   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:14.744016   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:17.243913   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:19.244051   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:21.244857   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:23.743729   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:25.744255   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:27.744400   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:30.244688   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:32.247066   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:34.743523   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:37.244239   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:39.743699   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:41.744670   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:44.244162   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:46.743513   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:49.245392   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:51.744149   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:54.248947   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:56.743993   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:59.244304   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:01.246223   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:03.744505   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:06.243892   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:08.743156   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:10.743380   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:12.744647   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:15.244219   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:17.744350   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:20.243654   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:22.245725   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:24.247107   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:26.743319   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:28.743362   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:30.744276   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:33.243318   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:35.245433   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:37.743505   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:39.745223   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:42.248295   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:44.742894   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:46.744704   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:49.243457   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:51.244130   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:53.745924   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	I1202 20:01:55.907841   93254 pod_ready.go:86] duration metric: took 3m17.670596483s for pod "kube-controller-manager-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:01:55.907902   93254 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1202 20:01:55.907923   93254 pod_ready.go:40] duration metric: took 4m0.000821875s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:01:55.911296   93254 out.go:203] 
	W1202 20:01:55.914260   93254 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1202 20:01:55.917058   93254 out.go:203] 
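The exit above comes from the extra readiness wait: pod_ready.go polls kube-system pods carrying the listed component labels, and kube-controller-manager-ha-791576-m02 never reported Ready within the 4m budget, so the run fails with GUEST_START. A minimal client-go sketch of this kind of readiness poll, assuming a reachable kubeconfig at the default location and reusing the pod name from the log (an illustration, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 4 minutes, roughly the cadence and budget seen in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-controller-manager-ha-791576-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep retrying until the deadline
			}
			return podReady(pod), nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err) // the path this test run hit
		return
	}
	fmt.Println("pod is Ready")
}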
	
	
	==> CRI-O <==
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.66851571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.69141266Z" level=info msg="Created container d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398: kube-system/storage-provisioner/storage-provisioner" id=1b10ff43-5e40-4558-8196-1d7f016dd505 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.692654188Z" level=info msg="Starting container: d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398" id=1c87f7b0-7024-41ae-99fe-2425cae60e3e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.694389348Z" level=info msg="Started container" PID=1429 containerID=d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398 description=kube-system/storage-provisioner/storage-provisioner id=1c87f7b0-7024-41ae-99fe-2425cae60e3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=efd793dccee0e2915ee98b405885350b8a60e3279add6b36c21a4428221c8a01
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.202100018Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206090778Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206127076Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206153939Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209705243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209867823Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209904696Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213036515Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213066955Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213094302Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.21610966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.216139813Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.228833217Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=39ed74a3-84e9-4181-80c6-ff0f611a3e84 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.23041474Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=10f326ec-4b42-40a0-bdba-06b31bdd4438 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.233901241Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-791576/kube-controller-manager" id=d524785c-b64f-418f-8cc7-4f78914e9ea9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.233996722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.250249794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.252295749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.2704154Z" level=info msg="Created container 2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4: kube-system/kube-controller-manager-ha-791576/kube-controller-manager" id=d524785c-b64f-418f-8cc7-4f78914e9ea9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.274529003Z" level=info msg="Starting container: 2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4" id=2b730746-da1e-4be4-b3ea-e96c0259c15d name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.277250428Z" level=info msg="Started container" PID=1479 containerID=2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4 description=kube-system/kube-controller-manager-ha-791576/kube-controller-manager id=2b730746-da1e-4be4-b3ea-e96c0259c15d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4659c27a1e2a230e86c92853e4a009f926841d3b7dc58fbc2c2a31be03f223b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	2f22118538832       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   3 minutes ago       Running             kube-controller-manager   7                   4659c27a1e2a2       kube-controller-manager-ha-791576   kube-system
	d355d98782252       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       5                   efd793dccee0e       storage-provisioner                 kube-system
	c5b23f7fd12dd       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   1                   083931905fb04       busybox-7b57f96db7-l5g8z            default
	5c0daa7c8d4e1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       4                   efd793dccee0e       storage-provisioner                 kube-system
	a7c674fd4beed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   0b0e4231caf19       coredns-66bc5c9577-w2245            kube-system
	1fa21535998b0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   cb80d052040d5       coredns-66bc5c9577-hw99j            kube-system
	355934c2fc929       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   4 minutes ago       Running             kube-proxy                2                   16e723f810dce       kube-proxy-q5vfv                    kube-system
	02e772d860e77       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               2                   9223b1241d5be       kindnet-m2l5j                       kube-system
	ad2e9bee4038e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   4 minutes ago       Exited              kube-controller-manager   6                   4659c27a1e2a2       kube-controller-manager-ha-791576   kube-system
	7193dbe9e1382       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   6 minutes ago       Running             kube-scheduler            2                   4b7e6eb9253e6       kube-scheduler-ha-791576            kube-system
	53ec2f9388eca       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   6 minutes ago       Running             kube-apiserver            2                   11498d51b1e18       kube-apiserver-ha-791576            kube-system
	9e7e710fc30aa       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   6 minutes ago       Running             kube-vip                  2                   447647f67c33c       kube-vip-ha-791576                  kube-system
	935b971802eea       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   6 minutes ago       Running             etcd                      2                   5c5f7b2e5b8f1       etcd-ha-791576                      kube-system
	
	
	==> coredns [1fa21535998b03372b957beaac33c0db2b71496fe539f42e2245c5ea3ba2d6e9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47259 - 63703 "HINFO IN 335106981740875206.600763774367396684. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.032064587s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a7c674fd4beedc2112aa22c1ce1eee71496d5b6be459181558118d06ad4a8445] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59040 - 1455 "HINFO IN 6249761343778063196.7050624658331465362. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039193622s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-791576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_41_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:41:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:01:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:01:52 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:01:52 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:01:52 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:01:52 +0000   Tue, 02 Dec 2025 19:47:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-791576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                2cbc5f56-f69a-4743-bfe0-c26cb688e6dd
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l5g8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 coredns-66bc5c9577-hw99j             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
	  kube-system                 coredns-66bc5c9577-w2245             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
	  kube-system                 etcd-ha-791576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
	  kube-system                 kindnet-m2l5j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-ha-791576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-791576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-q5vfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-791576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-791576                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m18s                kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   Starting                 20m                  kube-proxy       
	  Warning  CgroupV1                 20m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 20m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)    kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)    kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)    kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientMemory  20m                  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     20m                  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    20m                  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           19m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-791576 status is now: NodeReady
	  Normal   RegisteredNode           18m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           15m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)    kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)    kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)    kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   Starting                 6m6s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m6s (x8 over 6m6s)  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m6s (x8 over 6m6s)  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m6s (x8 over 6m6s)  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m28s                node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	
	
	Name:               ha-791576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:42:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:01:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:01:57 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:01:57 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:01:57 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:01:57 +0000   Tue, 02 Dec 2025 19:42:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-791576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dee40d7f-dceb-491c-be1b-bbfe6e5bbf5d
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-npkff                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-791576-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-ksng5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-791576-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-791576-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-pjkt7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-791576-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-791576-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 3m36s                kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   RegisteredNode           19m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           19m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)    kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)    kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   Starting                 6m3s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m3s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m2s (x8 over 6m3s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m2s (x8 over 6m3s)  kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m2s (x8 over 6m3s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m3s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m28s                node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	
	
	Name:               ha-791576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_44_30_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:01:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:58:46 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:58:46 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:58:46 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:58:46 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-791576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                368f8765-e8de-4d0d-9ce4-3a1b12660712
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-k9bh8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kindnet-8zbzj               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-proxy-4tffm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 3m54s                  kube-proxy       
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m (x3 over 17m)      kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x3 over 17m)      kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x3 over 17m)      kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-791576-m04 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeNotReady             13m                    node-controller  Node ha-791576-m04 status is now: NodeNotReady
	  Normal   Starting                 4m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m12s (x8 over 4m15s)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m12s (x8 over 4m15s)  kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m12s (x8 over 4m15s)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m28s                  node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:41] overlayfs: idmapped layers are currently not supported
	[ +32.622792] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:44] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:45] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:46] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:47] overlayfs: idmapped layers are currently not supported
	[ +24.868591] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:55] overlayfs: idmapped layers are currently not supported
	[  +3.715582] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [935b971802eea43815b6a2ba78749d6f6a65dfeb75a70453def4a7ff8c6e8f29] <==
	{"level":"info","ts":"2025-12-02T19:57:37.289125Z","caller":"traceutil/trace.go:172","msg":"trace[1743455241] range","detail":"{range_begin:/registry/resourceslices; range_end:; response_count:0; response_revision:3243; }","duration":"2.474069251s","start":"2025-12-02T19:57:34.815049Z","end":"2025-12-02T19:57:37.289118Z","steps":["trace[1743455241] 'agreement among raft nodes before linearized reading'  (duration: 2.474052366s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289212Z","caller":"traceutil/trace.go:172","msg":"trace[721646064] range","detail":"{range_begin:/registry/validatingwebhookconfigurations; range_end:; response_count:0; response_revision:3243; }","duration":"2.763305582s","start":"2025-12-02T19:57:34.525901Z","end":"2025-12-02T19:57:37.289207Z","steps":["trace[721646064] 'agreement among raft nodes before linearized reading'  (duration: 2.763290682s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289327Z","caller":"traceutil/trace.go:172","msg":"trace[893577248] range","detail":"{range_begin:/registry/minions/ha-791576-m02; range_end:; response_count:1; response_revision:3243; }","duration":"3.81972494s","start":"2025-12-02T19:57:33.469598Z","end":"2025-12-02T19:57:37.289323Z","steps":["trace[893577248] 'agreement among raft nodes before linearized reading'  (duration: 3.819681109s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289420Z","caller":"traceutil/trace.go:172","msg":"trace[190971960] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:3243; }","duration":"3.852850225s","start":"2025-12-02T19:57:33.436565Z","end":"2025-12-02T19:57:37.289415Z","steps":["trace[190971960] 'agreement among raft nodes before linearized reading'  (duration: 3.852832453s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289533Z","caller":"traceutil/trace.go:172","msg":"trace[248971072] range","detail":"{range_begin:/registry/minions/ha-791576; range_end:; response_count:1; response_revision:3243; }","duration":"4.111340804s","start":"2025-12-02T19:57:33.178187Z","end":"2025-12-02T19:57:37.289528Z","steps":["trace[248971072] 'agreement among raft nodes before linearized reading'  (duration: 4.111300945s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305093Z","caller":"traceutil/trace.go:172","msg":"trace[297507126] range","detail":"{range_begin:/registry/servicecidrs; range_end:; response_count:0; response_revision:3243; }","duration":"4.463571378s","start":"2025-12-02T19:57:32.841509Z","end":"2025-12-02T19:57:37.305080Z","steps":["trace[297507126] 'agreement among raft nodes before linearized reading'  (duration: 4.463498813s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305320Z","caller":"traceutil/trace.go:172","msg":"trace[595455530] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:3243; }","duration":"4.464632101s","start":"2025-12-02T19:57:32.840683Z","end":"2025-12-02T19:57:37.305315Z","steps":["trace[595455530] 'agreement among raft nodes before linearized reading'  (duration: 4.464565141s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305395Z","caller":"traceutil/trace.go:172","msg":"trace[375887267] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:3243; }","duration":"4.464720362s","start":"2025-12-02T19:57:32.840668Z","end":"2025-12-02T19:57:37.305388Z","steps":["trace[375887267] 'agreement among raft nodes before linearized reading'  (duration: 4.464704994s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305515Z","caller":"traceutil/trace.go:172","msg":"trace[461441867] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:3243; }","duration":"4.464883911s","start":"2025-12-02T19:57:32.840626Z","end":"2025-12-02T19:57:37.305510Z","steps":["trace[461441867] 'agreement among raft nodes before linearized reading'  (duration: 4.464840827s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305579Z","caller":"traceutil/trace.go:172","msg":"trace[59432717] range","detail":"{range_begin:/registry/podtemplates; range_end:; response_count:0; response_revision:3243; }","duration":"4.46501921s","start":"2025-12-02T19:57:32.840556Z","end":"2025-12-02T19:57:37.305575Z","steps":["trace[59432717] 'agreement among raft nodes before linearized reading'  (duration: 4.465005344s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305697Z","caller":"traceutil/trace.go:172","msg":"trace[1458863396] range","detail":"{range_begin:/registry/leases; range_end:; response_count:0; response_revision:3243; }","duration":"4.465158422s","start":"2025-12-02T19:57:32.840534Z","end":"2025-12-02T19:57:37.305692Z","steps":["trace[1458863396] 'agreement among raft nodes before linearized reading'  (duration: 4.46513325s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305800Z","caller":"traceutil/trace.go:172","msg":"trace[1000282895] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:3243; }","duration":"4.465276582s","start":"2025-12-02T19:57:32.840519Z","end":"2025-12-02T19:57:37.305795Z","steps":["trace[1000282895] 'agreement among raft nodes before linearized reading'  (duration: 4.465257522s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305791Z","caller":"traceutil/trace.go:172","msg":"trace[1507459937] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:3243; }","duration":"4.462075152s","start":"2025-12-02T19:57:32.843708Z","end":"2025-12-02T19:57:37.305783Z","steps":["trace[1507459937] 'agreement among raft nodes before linearized reading'  (duration: 4.462030862s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305902Z","caller":"traceutil/trace.go:172","msg":"trace[1236842159] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:3243; }","duration":"4.465397539s","start":"2025-12-02T19:57:32.840500Z","end":"2025-12-02T19:57:37.305898Z","steps":["trace[1236842159] 'agreement among raft nodes before linearized reading'  (duration: 4.465372333s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305984Z","caller":"traceutil/trace.go:172","msg":"trace[98205234] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:3243; }","duration":"4.465496416s","start":"2025-12-02T19:57:32.840483Z","end":"2025-12-02T19:57:37.305980Z","steps":["trace[98205234] 'agreement among raft nodes before linearized reading'  (duration: 4.465480556s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305983Z","caller":"traceutil/trace.go:172","msg":"trace[651506030] range","detail":"{range_begin:/registry/endpointslices; range_end:; response_count:0; response_revision:3243; }","duration":"4.463451594s","start":"2025-12-02T19:57:32.842526Z","end":"2025-12-02T19:57:37.305977Z","steps":["trace[651506030] 'agreement among raft nodes before linearized reading'  (duration: 4.463413179s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306057Z","caller":"traceutil/trace.go:172","msg":"trace[975673522] range","detail":"{range_begin:/registry/priorityclasses; range_end:; response_count:0; response_revision:3243; }","duration":"4.465585932s","start":"2025-12-02T19:57:32.840467Z","end":"2025-12-02T19:57:37.306053Z","steps":["trace[975673522] 'agreement among raft nodes before linearized reading'  (duration: 4.46557223s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306131Z","caller":"traceutil/trace.go:172","msg":"trace[1518714069] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:3243; }","duration":"4.465675768s","start":"2025-12-02T19:57:32.840451Z","end":"2025-12-02T19:57:37.306127Z","steps":["trace[1518714069] 'agreement among raft nodes before linearized reading'  (duration: 4.465661179s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306144Z","caller":"traceutil/trace.go:172","msg":"trace[1421790493] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:3243; }","duration":"4.46438218s","start":"2025-12-02T19:57:32.841756Z","end":"2025-12-02T19:57:37.306138Z","steps":["trace[1421790493] 'agreement among raft nodes before linearized reading'  (duration: 4.464311749s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306208Z","caller":"traceutil/trace.go:172","msg":"trace[1547210265] range","detail":"{range_begin:/registry/validatingwebhookconfigurations; range_end:; response_count:0; response_revision:3243; }","duration":"4.465771084s","start":"2025-12-02T19:57:32.840433Z","end":"2025-12-02T19:57:37.306204Z","steps":["trace[1547210265] 'agreement among raft nodes before linearized reading'  (duration: 4.465752828s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306205Z","caller":"traceutil/trace.go:172","msg":"trace[249476617] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:3243; }","duration":"4.464640799s","start":"2025-12-02T19:57:32.841560Z","end":"2025-12-02T19:57:37.306200Z","steps":["trace[249476617] 'agreement among raft nodes before linearized reading'  (duration: 4.464625038s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306287Z","caller":"traceutil/trace.go:172","msg":"trace[1206716498] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:3243; }","duration":"4.465871217s","start":"2025-12-02T19:57:32.840411Z","end":"2025-12-02T19:57:37.306283Z","steps":["trace[1206716498] 'agreement among raft nodes before linearized reading'  (duration: 4.465856891s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306312Z","caller":"traceutil/trace.go:172","msg":"trace[602901791] range","detail":"{range_begin:/registry/persistentvolumes; range_end:; response_count:0; response_revision:3243; }","duration":"4.464761461s","start":"2025-12-02T19:57:32.841544Z","end":"2025-12-02T19:57:37.306306Z","steps":["trace[602901791] 'agreement among raft nodes before linearized reading'  (duration: 4.464743147s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.314721Z","caller":"traceutil/trace.go:172","msg":"trace[750533836] transaction","detail":"{read_only:false; response_revision:3244; number_of_response:1; }","duration":"3.394937822s","start":"2025-12-02T19:57:33.919770Z","end":"2025-12-02T19:57:37.314708Z","steps":["trace[750533836] 'process raft request'  (duration: 3.394760735s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289644Z","caller":"traceutil/trace.go:172","msg":"trace[1648650429] range","detail":"{range_begin:/registry/leases/kube-node-lease/ha-791576; range_end:; response_count:1; response_revision:3243; }","duration":"4.146672746s","start":"2025-12-02T19:57:33.142966Z","end":"2025-12-02T19:57:37.289639Z","steps":["trace[1648650429] 'agreement among raft nodes before linearized reading'  (duration: 4.146635356s)"],"step_count":1}
	
	
	==> kernel <==
	 20:01:57 up  1:44,  0 user,  load average: 0.88, 1.44, 1.41
	Linux ha-791576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [02e772d860e77006ec0b051223b10e67de2ed41ecc1b18874de331cdb32bd1a6] <==
	I1202 20:01:08.209975       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:01:18.201743       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:01:18.201847       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:01:18.202040       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:01:18.202055       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:01:18.202112       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:18.202125       1 main.go:301] handling current node
	I1202 20:01:28.201536       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:01:28.201568       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:01:28.201891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:28.201909       1 main.go:301] handling current node
	I1202 20:01:28.201922       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:01:28.201928       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:01:38.201065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:38.201205       1 main.go:301] handling current node
	I1202 20:01:38.201245       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:01:38.201290       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:01:38.201567       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:01:38.201636       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:01:48.205486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:48.205520       1 main.go:301] handling current node
	I1202 20:01:48.205536       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:01:48.205541       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:01:48.205735       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:01:48.205749       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [53ec2f9388ecacb74421a2e8c3b5d943afd06e705e756948fa12bc41dd8a37f9] <==
	{"level":"warn","ts":"2025-12-02T19:57:37.266812Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d23c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.266832Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001d01680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274392Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a21a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274785Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025223c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274836Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001283860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274869Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001e212c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274899Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000c8fa40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274921Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d32c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274946Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002889680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274966Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f383c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274993Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028881e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275010Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023a65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275027Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f394a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275097Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400248da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275220Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028890e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.279316Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028541e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.279511Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000c8fa40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1202 19:57:37.337298       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	{"level":"warn","ts":"2025-12-02T19:57:38.096782Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d23c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1202 19:57:38.096878       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.128576061s, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	I1202 19:57:40.624545       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1202 19:57:40.936228       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1202 19:58:29.433907       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 19:58:31.983629       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 19:58:32.004810       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4] <==
	E1202 19:59:09.180866       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180895       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180903       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180909       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180913       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180918       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180924       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	I1202 19:59:09.200950       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-791576-m03"
	I1202 19:59:09.233282       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-791576-m03"
	I1202 19:59:09.233391       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-xjn7v"
	I1202 19:59:09.267544       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-xjn7v"
	I1202 19:59:09.267590       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-791576-m03"
	I1202 19:59:09.304785       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-791576-m03"
	I1202 19:59:09.305077       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-791576-m03"
	I1202 19:59:09.339802       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-791576-m03"
	I1202 19:59:09.339845       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-791576-m03"
	I1202 19:59:09.388801       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-791576-m03"
	I1202 19:59:09.388937       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dvt58"
	I1202 19:59:09.431739       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dvt58"
	I1202 19:59:09.432083       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:59:09.469146       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:59:09.469262       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-791576-m03"
	I1202 19:59:09.512224       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-791576-m03"
	I1202 19:59:09.512321       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2pf27"
	I1202 19:59:09.551464       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2pf27"
	
	
	==> kube-controller-manager [ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b] <==
	I1202 19:57:21.480081       1 serving.go:386] Generated self-signed cert in-memory
	I1202 19:57:22.307047       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 19:57:22.307083       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:57:22.308866       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 19:57:22.309043       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 19:57:22.309144       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 19:57:22.309457       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1202 19:57:37.311326       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [355934c2fc92908a3d014373a10e2ad38fde6cd637a204a613dd4cf27e58d5de] <==
	I1202 19:57:38.434579       1 server_linux.go:53] "Using iptables proxy"
	I1202 19:57:38.599480       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:57:38.700098       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:57:38.700208       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 19:57:38.700313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:57:38.806652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 19:57:38.806864       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:57:38.840406       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:57:38.840778       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:57:38.840994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:57:38.842280       1 config.go:200] "Starting service config controller"
	I1202 19:57:38.842343       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:57:38.842391       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:57:38.842435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:57:38.842472       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:57:38.842507       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:57:38.849620       1 config.go:309] "Starting node config controller"
	I1202 19:57:38.849733       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:57:38.849766       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 19:57:38.946880       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:57:38.946930       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 19:57:38.946999       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7193dbe9e138217968055549ef0c321456d1ba0d688ed39c88faecd90d288068] <==
	I1202 19:55:55.231666       1 serving.go:386] Generated self-signed cert in-memory
	W1202 19:56:01.322035       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 19:56:01.322158       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 19:56:01.322194       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 19:56:01.322238       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 19:56:01.414510       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 19:56:01.414609       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:56:01.445556       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:56:01.445721       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:56:01.445867       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 19:56:01.446001       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 19:56:01.545921       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1202 19:58:29.288494       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-k9bh8\": pod busybox-7b57f96db7-k9bh8 is already assigned to node \"ha-791576-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-k9bh8" node="ha-791576-m04"
	E1202 19:58:29.288769       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4eb2efb8-62a6-4a52-bafd-ddc9837ef293(default/busybox-7b57f96db7-k9bh8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-k9bh8"
	E1202 19:58:29.288838       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-k9bh8\": pod busybox-7b57f96db7-k9bh8 is already assigned to node \"ha-791576-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-k9bh8"
	I1202 19:58:29.290780       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-k9bh8" node="ha-791576-m04"
	
	
	==> kubelet <==
	Dec 02 19:57:23 ha-791576 kubelet[806]: E1202 19:57:23.174754     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-791576\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-791576/status?timeout=10s\": context deadline exceeded"
	Dec 02 19:57:32 ha-791576 kubelet[806]: E1202 19:57:32.339488     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-791576?timeout=10s\": context deadline exceeded" interval="800ms"
	Dec 02 19:57:33 ha-791576 kubelet[806]: E1202 19:57:33.176339     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-791576\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-791576?timeout=10s\": context deadline exceeded"
	Dec 02 19:57:35 ha-791576 kubelet[806]: E1202 19:57:35.968777     806 kubelet.go:3222] "Failed creating a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-controller-manager-ha-791576"
	Dec 02 19:57:35 ha-791576 kubelet[806]: I1202 19:57:35.968822     806 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-791576"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.477293     806 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.550540     806 scope.go:117] "RemoveContainer" containerID="1481b78f0b49db2c5b77d1f4b1a48f1606d7b5b7efc574d9920be0dcf7d60944"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.551052     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:37 ha-791576 kubelet[806]: E1202 19:57:37.551183     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:38 ha-791576 kubelet[806]: W1202 19:57:38.005619     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c WatchSource:0}: Error finding container 083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c: Status 404 returned error can't find the container with id 083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.163483     806 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-791576\" already exists" pod="kube-system/kube-scheduler-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: I1202 19:57:38.163520     806 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.241716     806 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-vip-ha-791576\" already exists" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: I1202 19:57:38.576730     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.577312     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:45 ha-791576 kubelet[806]: I1202 19:57:45.547133     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:45 ha-791576 kubelet[806]: E1202 19:57:45.547777     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:51 ha-791576 kubelet[806]: E1202 19:57:51.235433     806 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9047b34b16f7f1aeb5b86610976368ec3265e72120dd291f6ef7165fbdb40f01/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9047b34b16f7f1aeb5b86610976368ec3265e72120dd291f6ef7165fbdb40f01/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/4.log: no such file or directory
	Dec 02 19:57:51 ha-791576 kubelet[806]: E1202 19:57:51.237620     806 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/11770d173b0bf8e21fa767a44a6b06c28990c5d024bd0ff30f895a2c8315127e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/11770d173b0bf8e21fa767a44a6b06c28990c5d024bd0ff30f895a2c8315127e/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/5.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/5.log: no such file or directory
	Dec 02 19:57:58 ha-791576 kubelet[806]: I1202 19:57:58.228513     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:58 ha-791576 kubelet[806]: E1202 19:57:58.229379     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:58:08 ha-791576 kubelet[806]: I1202 19:58:08.659780     806 scope.go:117] "RemoveContainer" containerID="5c0daa7c8d4e1a9a2a77b1849e4249d4f9f28faa84c47fbc750bdf4924430591"
	Dec 02 19:58:11 ha-791576 kubelet[806]: I1202 19:58:11.230446     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:58:11 ha-791576 kubelet[806]: E1202 19:58:11.230623     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:58:26 ha-791576 kubelet[806]: I1202 19:58:26.228365     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-791576 -n ha-791576
helpers_test.go:269: (dbg) Run:  kubectl --context ha-791576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (374.39s)
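The two post-mortem probes above (the API-server status query and the listing of non-Running pods) can be reproduced outside the harness. A minimal sketch, assuming the minikube binary and kubectl are reachable from the working directory; the run helper is illustrative and is not the helpers_test.go code:

// Hypothetical sketch (assumed helper names, not the helpers_test.go implementation):
// re-run the same two post-mortem probes by shelling out, as the (dbg) runner does.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, folding any error
// into the returned string so the probe never aborts the post-mortem.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Sprintf("%s failed: %v\n%s", name, err, out)
	}
	return string(out)
}

func main() {
	// API-server state for the profile, as in helpers_test.go:262.
	fmt.Print(run("out/minikube-linux-arm64", "status", "--format={{.APIServer}}", "-p", "ha-791576", "-n", "ha-791576"))
	// Pods that are not in phase Running, as in helpers_test.go:269.
	fmt.Print(run("kubectl", "--context", "ha-791576", "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}", "--field-selector=status.phase!=Running"))
}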

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-791576" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-791576\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-791576\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-791576\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-791576
helpers_test.go:243: (dbg) docker inspect ha-791576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	        "Created": "2025-12-02T19:40:54.919017186Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93382,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:55:44.458015606Z",
	            "FinishedAt": "2025-12-02T19:55:43.73005975Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hostname",
	        "HostsPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hosts",
	        "LogPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94-json.log",
	        "Name": "/ha-791576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-791576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-791576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	                "LowerDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-791576",
	                "Source": "/var/lib/docker/volumes/ha-791576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-791576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-791576",
	                "name.minikube.sigs.k8s.io": "ha-791576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "751177d5ee464382bdbbbb72de4fb526573054bfa543b68ed932cd0c1d287957",
	            "SandboxKey": "/var/run/docker/netns/751177d5ee46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-791576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:0b:05:fd:a7:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56dad1208e3b87b69e94173604d284ae0e7c0f0097a9b4d2483c8eb74a9ccc65",
	                    "EndpointID": "f86c1b624622b29b058cdcb9ce2cd5d942bc8d95518744c77b2a01273b6d217e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-791576",
	                        "f426f8269bd9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-791576 -n ha-791576
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 logs -n 25: (1.683612379s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576-m04:/home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp testdata/cp-test.txt ha-791576-m04:/home/docker/cp-test.txt                                                             │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m04.txt │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m04_ha-791576.txt                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576.txt                                                 │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node start m02 --alsologtostderr -v 5                                                                                      │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:46 UTC │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │ 02 Dec 25 19:46 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5                                                                                   │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	│ node    │ ha-791576 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │ 02 Dec 25 19:55 UTC │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │ 02 Dec 25 19:55 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:55:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:55:44.177967   93254 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:55:44.178109   93254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.178122   93254 out.go:374] Setting ErrFile to fd 2...
	I1202 19:55:44.178128   93254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.178419   93254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:55:44.178766   93254 out.go:368] Setting JSON to false
	I1202 19:55:44.179556   93254 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5883,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:55:44.179622   93254 start.go:143] virtualization:  
	I1202 19:55:44.182617   93254 out.go:179] * [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:55:44.186436   93254 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:55:44.186585   93254 notify.go:221] Checking for updates...
	I1202 19:55:44.192062   93254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:55:44.194974   93254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:44.197803   93254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:55:44.200682   93254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:55:44.203721   93254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:55:44.206951   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:44.207525   93254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:55:44.231700   93254 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:55:44.231811   93254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:44.301596   93254 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:55:44.287047316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:44.301733   93254 docker.go:319] overlay module found
	I1202 19:55:44.304924   93254 out.go:179] * Using the docker driver based on existing profile
	I1202 19:55:44.307862   93254 start.go:309] selected driver: docker
	I1202 19:55:44.307884   93254 start.go:927] validating driver "docker" against &{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kube
flow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:44.308026   93254 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:55:44.308131   93254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:44.371573   93254 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:55:44.362799023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:44.372011   93254 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:55:44.372042   93254 cni.go:84] Creating CNI manager for ""
	I1202 19:55:44.372097   93254 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 19:55:44.372154   93254 start.go:353] cluster config:
	{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:44.377185   93254 out.go:179] * Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	I1202 19:55:44.379977   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:55:44.382846   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:55:44.385821   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:44.385879   93254 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 19:55:44.385893   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:55:44.385993   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:55:44.386008   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:55:44.386151   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:44.386369   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:55:44.405321   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:55:44.405352   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:55:44.405373   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:55:44.405404   93254 start.go:360] acquireMachinesLock for ha-791576: {Name:mkf0dd990410be65dae2a66c72db1ebd52e2b0ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:55:44.405469   93254 start.go:364] duration metric: took 41.304µs to acquireMachinesLock for "ha-791576"
	I1202 19:55:44.405492   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:55:44.405502   93254 fix.go:54] fixHost starting: 
	I1202 19:55:44.405802   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.422067   93254 fix.go:112] recreateIfNeeded on ha-791576: state=Stopped err=<nil>
	W1202 19:55:44.422096   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:55:44.425385   93254 out.go:252] * Restarting existing docker container for "ha-791576" ...
	I1202 19:55:44.425482   93254 cli_runner.go:164] Run: docker start ha-791576
	I1202 19:55:44.656773   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.678497   93254 kic.go:430] container "ha-791576" state is running.
	I1202 19:55:44.678860   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:44.708256   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:44.708493   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:44.708552   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:44.731511   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:44.731837   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:44.731849   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:44.733165   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:55:47.885197   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:55:47.885250   93254 ubuntu.go:182] provisioning hostname "ha-791576"
	I1202 19:55:47.885314   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:47.903491   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:47.903813   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:47.903827   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576 && echo "ha-791576" | sudo tee /etc/hostname
	I1202 19:55:48.069176   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:55:48.069254   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.089514   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:48.089877   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:48.089901   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:48.242008   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:48.242032   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:55:48.242057   93254 ubuntu.go:190] setting up certificates
	I1202 19:55:48.242069   93254 provision.go:84] configureAuth start
	I1202 19:55:48.242132   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:48.261821   93254 provision.go:143] copyHostCerts
	I1202 19:55:48.261871   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:48.261931   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:55:48.261951   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:48.262038   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:55:48.262141   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:48.262166   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:55:48.262174   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:48.262211   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:55:48.262289   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:48.262314   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:55:48.262323   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:48.262355   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:55:48.262435   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576 san=[127.0.0.1 192.168.49.2 ha-791576 localhost minikube]
	I1202 19:55:48.452060   93254 provision.go:177] copyRemoteCerts
	I1202 19:55:48.452139   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:48.452177   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.470613   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:48.573192   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:55:48.573250   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:48.589521   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:55:48.589763   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 19:55:48.606218   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:55:48.606297   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 19:55:48.623387   93254 provision.go:87] duration metric: took 381.29482ms to configureAuth
	I1202 19:55:48.623419   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:48.623653   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:48.623765   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.640254   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:48.640566   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:48.640586   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:49.030725   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:49.030745   93254 machine.go:97] duration metric: took 4.32224289s to provisionDockerMachine
	I1202 19:55:49.030757   93254 start.go:293] postStartSetup for "ha-791576" (driver="docker")
	I1202 19:55:49.030768   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:49.030827   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:49.030865   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.051519   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.153353   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:49.156583   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:49.156607   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:49.156618   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:55:49.156674   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:55:49.156758   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:55:49.156764   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:55:49.156861   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:55:49.164042   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:49.180380   93254 start.go:296] duration metric: took 149.593959ms for postStartSetup
	I1202 19:55:49.180465   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:49.180519   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.197329   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.298832   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:49.303554   93254 fix.go:56] duration metric: took 4.898044691s for fixHost
	I1202 19:55:49.303578   93254 start.go:83] releasing machines lock for "ha-791576", held for 4.898097178s
	I1202 19:55:49.303651   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:49.320407   93254 ssh_runner.go:195] Run: cat /version.json
	I1202 19:55:49.320456   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.320470   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:49.320533   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.338342   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.345505   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.524252   93254 ssh_runner.go:195] Run: systemctl --version
	I1202 19:55:49.530647   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:49.565296   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:49.569498   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:49.569577   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:49.577094   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
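The find/mv step above sidelines any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled so they cannot conflict with the CNI minikube manages (kindnet, per the multinode detection further down); here nothing matched, so nothing was disabled. A minimal sketch for checking the result by hand, assuming the node is reachable with minikube ssh under the ha-791576 profile from this log:

	# list active CNI configs and any that were sidelined as *.mk_disabled
	minikube -p ha-791576 ssh -- sudo ls -l /etc/cni/net.d/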
	I1202 19:55:49.577167   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:55:49.577205   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:55:49.577256   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:49.592079   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:49.605549   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:49.605621   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:49.621023   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:49.635753   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:49.750982   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:55:49.859462   93254 docker.go:234] disabling docker service ...
	I1202 19:55:49.859565   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:55:49.874667   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:55:49.887012   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:55:50.007847   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:55:50.134338   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:55:50.146986   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:55:50.161229   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:55:50.161317   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.170383   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:55:50.170453   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.179542   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.188652   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.197399   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:55:50.205856   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.214897   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.223103   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.231783   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:55:50.238878   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:55:50.245749   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:50.382453   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
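The block above points crictl at the CRI-O socket and rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, and the net.ipv4.ip_unprivileged_port_start sysctl) before enabling IPv4 forwarding and restarting the service. A small sketch to confirm the resulting settings on the node, assuming minikube ssh access to this profile:

	# show the lines touched by the sed edits above, then confirm cri-o came back up
	minikube -p ha-791576 ssh -- sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	minikube -p ha-791576 ssh -- sudo systemctl is-active crio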
	I1202 19:55:50.564448   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:55:50.564526   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:55:50.568176   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:55:50.568235   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:55:50.571563   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:55:50.595656   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:55:50.595739   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:55:50.625390   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:55:50.655103   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:55:50.658061   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:55:50.674479   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:55:50.678575   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
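The one-liner above makes the /etc/hosts update idempotent: it filters out any stale host.minikube.internal entry, appends the current one, and copies the file back; the same pattern is reused further down for control-plane.minikube.internal. A standalone sketch of the pattern, with the values taken from this log:

	# idempotent /etc/hosts entry for the gateway name minikube injects
	HOST_IP=192.168.49.1
	HOST_NAME=host.minikube.internal
	{ grep -v $'\t'"$HOST_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$HOST_IP" "$HOST_NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts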
	I1202 19:55:50.688260   93254 kubeadm.go:884] updating cluster {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:55:50.688998   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:50.689083   93254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:50.726565   93254 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:50.726626   93254 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:55:50.726708   93254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:50.756058   93254 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:50.756081   93254 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:55:50.756091   93254 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:55:50.756189   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
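The [Service] override above is rendered in memory and, per the scp calls further down in the log, lands as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. A quick sketch for inspecting the merged unit on the node, assuming minikube ssh access to this profile:

	# print the kubelet unit together with the minikube drop-in, then the effective ExecStart
	minikube -p ha-791576 ssh -- sudo systemctl cat kubelet
	minikube -p ha-791576 ssh -- sudo systemctl show kubelet -p ExecStart --no-pager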
	I1202 19:55:50.756269   93254 ssh_runner.go:195] Run: crio config
	I1202 19:55:50.831624   93254 cni.go:84] Creating CNI manager for ""
	I1202 19:55:50.831657   93254 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 19:55:50.831710   93254 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:55:50.831742   93254 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-791576 NodeName:ha-791576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:55:50.831887   93254 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-791576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
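The rendered kubeadm config above is staged to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later compared against the kubeadm.yaml already on the node to decide whether the control plane needs reconfiguring. The comparison can be reproduced by hand with the same command the log runs at 19:55:52:

	# empty diff output means the running cluster does not require reconfiguration
	minikube -p ha-791576 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new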
	I1202 19:55:50.831904   93254 kube-vip.go:115] generating kube-vip config ...
	I1202 19:55:50.831959   93254 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:55:50.843196   93254 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:55:50.843290   93254 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
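Because "lsmod | grep ip_vs" returned nothing, the manifest above runs kube-vip without IPVS control-plane load balancing and only announces the HA VIP 192.168.49.254 over ARP on eth0, fronting API server port 8443. A rough sketch for probing the VIP once the static pod is running; this assumes the default unauthenticated access to /version and that the cluster's docker network is reachable from the host running the test:

	# the VIP should appear as an extra address on eth0 of the current kube-vip leader
	minikube -p ha-791576 ssh -- ip addr show eth0
	# -k because only reachability is probed here; certificates are checked elsewhere in the log
	curl -k -sS https://192.168.49.254:8443/version || echo "VIP not answering yet"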
	I1202 19:55:50.843354   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:55:50.850587   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:55:50.850656   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 19:55:50.857765   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 19:55:50.869276   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:55:50.881241   93254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1202 19:55:50.893240   93254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:55:50.905823   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:55:50.909303   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:50.918750   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:51.026144   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:55:51.042322   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.2
	I1202 19:55:51.042383   93254 certs.go:195] generating shared ca certs ...
	I1202 19:55:51.042413   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.042572   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:55:51.042673   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:55:51.042696   93254 certs.go:257] generating profile certs ...
	I1202 19:55:51.042790   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:55:51.042844   93254 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f
	I1202 19:55:51.042883   93254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1202 19:55:51.207706   93254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f ...
	I1202 19:55:51.207774   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f: {Name:mk0befc0b318cce17722eedc60197d074ef72403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.208003   93254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f ...
	I1202 19:55:51.208041   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f: {Name:mk6747dc6a0e6b21e4d9bc0a0b21cc4e1f72108f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.208176   93254 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt
	I1202 19:55:51.208351   93254 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key
	I1202 19:55:51.208521   93254 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:55:51.208562   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:55:51.208598   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:55:51.208631   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:55:51.208669   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:55:51.208699   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:55:51.208731   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:55:51.208772   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:55:51.208803   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:55:51.208876   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:55:51.208937   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:55:51.208962   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:55:51.209012   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:55:51.209063   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:55:51.209110   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:55:51.209189   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:51.209271   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.209343   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.209384   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.210038   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:55:51.231782   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:55:51.250385   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:55:51.267781   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:55:51.286345   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 19:55:51.304523   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:55:51.322173   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:55:51.340727   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:55:51.358222   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:55:51.376555   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:55:51.392531   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:55:51.409238   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:55:51.421079   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:55:51.427316   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:55:51.435537   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.438995   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.439062   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.479993   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:55:51.487626   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:55:51.495524   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.499309   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.499393   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.539899   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:55:51.548401   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:55:51.556378   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.559859   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.559918   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.600611   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:55:51.608321   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:55:51.611874   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:55:51.656450   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:55:51.699650   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:55:51.748675   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:55:51.798307   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:55:51.891003   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
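The openssl calls above do two separate jobs: "x509 -hash -noout" computes the subject-name hash that names the /etc/ssl/certs/<hash>.0 symlinks (3ec20f2e, b5213941 and 51391683 here), and "x509 -checkend 86400" verifies that none of the control-plane certificates expires within the next 24 hours (86,400 seconds). A standalone sketch of both checks; CERT points at one of the certificates copied to the node above:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	openssl x509 -hash -noout -in "$CERT"            # prints the hash used for the /etc/ssl/certs/<hash>.0 link
	openssl x509 -noout -in "$CERT" -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"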
	I1202 19:55:51.960070   93254 kubeadm.go:401] StartCluster: {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:51.960253   93254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:55:51.960360   93254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:55:52.020134   93254 cri.go:89] found id: "7193dbe9e138217968055549ef0c321456d1ba0d688ed39c88faecd90d288068"
	I1202 19:55:52.020208   93254 cri.go:89] found id: "53ec2f9388ecacb74421a2e8c3b5d943afd06e705e756948fa12bc41dd8a37f9"
	I1202 19:55:52.020237   93254 cri.go:89] found id: "9e7e710fc30aaba995500f37ffa3972d03427ad4b5096ea5e3f635761be6fe1e"
	I1202 19:55:52.020256   93254 cri.go:89] found id: "b0964e2af680e31e59bc41f16955d47d76026029392b1597b247a7226618e258"
	I1202 19:55:52.020292   93254 cri.go:89] found id: "935b971802eea43815b6a2ba78749d6f6a65dfeb75a70453def4a7ff8c6e8f29"
	I1202 19:55:52.020316   93254 cri.go:89] found id: ""
	I1202 19:55:52.020420   93254 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 19:55:52.039471   93254 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:55:52Z" level=error msg="open /run/runc: no such file or directory"
	I1202 19:55:52.039648   93254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:55:52.052041   93254 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:55:52.052113   93254 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:55:52.052202   93254 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:55:52.067291   93254 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:55:52.067793   93254 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-791576" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:52.067946   93254 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "ha-791576" cluster setting kubeconfig missing "ha-791576" context setting]
	I1202 19:55:52.068355   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.069044   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:55:52.069935   93254 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:55:52.070037   93254 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:55:52.070083   93254 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:55:52.070105   93254 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:55:52.070125   93254 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:55:52.070010   93254 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:55:52.070578   93254 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:55:52.089251   93254 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:55:52.089343   93254 kubeadm.go:602] duration metric: took 37.210796ms to restartPrimaryControlPlane
	I1202 19:55:52.089369   93254 kubeadm.go:403] duration metric: took 129.308895ms to StartCluster
	I1202 19:55:52.089422   93254 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.089527   93254 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:52.090263   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.090544   93254 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:55:52.090598   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:55:52.090630   93254 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:55:52.091558   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:52.096453   93254 out.go:179] * Enabled addons: 
	I1202 19:55:52.099512   93254 addons.go:530] duration metric: took 8.877075ms for enable addons: enabled=[]
	I1202 19:55:52.099607   93254 start.go:247] waiting for cluster config update ...
	I1202 19:55:52.099630   93254 start.go:256] writing updated cluster config ...
	I1202 19:55:52.102945   93254 out.go:203] 
	I1202 19:55:52.106144   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:52.106258   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.109518   93254 out.go:179] * Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	I1202 19:55:52.112289   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:55:52.115487   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:55:52.118244   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:52.118264   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:55:52.118378   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:55:52.118387   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:55:52.118504   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.118707   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:55:52.150292   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:55:52.150314   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:55:52.150328   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:55:52.150350   93254 start.go:360] acquireMachinesLock for ha-791576-m02: {Name:mka8905b718126ef1af03281337b5ea61f190248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:55:52.150401   93254 start.go:364] duration metric: took 35.93µs to acquireMachinesLock for "ha-791576-m02"
	I1202 19:55:52.150419   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:55:52.150424   93254 fix.go:54] fixHost starting: m02
	I1202 19:55:52.150685   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:52.190695   93254 fix.go:112] recreateIfNeeded on ha-791576-m02: state=Stopped err=<nil>
	W1202 19:55:52.190719   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:55:52.194176   93254 out.go:252] * Restarting existing docker container for "ha-791576-m02" ...
	I1202 19:55:52.194252   93254 cli_runner.go:164] Run: docker start ha-791576-m02
	I1202 19:55:52.599976   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:52.629412   93254 kic.go:430] container "ha-791576-m02" state is running.
	I1202 19:55:52.629885   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:52.664048   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.664285   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:52.664350   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:52.688321   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:52.688636   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:52.688648   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:52.689286   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:55:55.971095   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:55:55.971155   93254 ubuntu.go:182] provisioning hostname "ha-791576-m02"
	I1202 19:55:55.971238   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:55.998825   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:55.999132   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:55.999149   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m02 && echo "ha-791576-m02" | sudo tee /etc/hostname
	I1202 19:55:56.285260   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:55:56.285380   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:56.324784   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:56.325097   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:56.325112   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:56.574478   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:56.574546   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:55:56.574578   93254 ubuntu.go:190] setting up certificates
	I1202 19:55:56.574609   93254 provision.go:84] configureAuth start
	I1202 19:55:56.574702   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:56.605527   93254 provision.go:143] copyHostCerts
	I1202 19:55:56.605564   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:56.605607   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:55:56.605617   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:56.605764   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:55:56.605858   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:56.605875   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:55:56.605880   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:56.605907   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:55:56.605945   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:56.605961   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:55:56.605965   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:56.605988   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:55:56.606032   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m02 san=[127.0.0.1 192.168.49.3 ha-791576-m02 localhost minikube]
	I1202 19:55:57.020409   93254 provision.go:177] copyRemoteCerts
	I1202 19:55:57.020550   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:57.020628   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:57.038510   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:57.153644   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:55:57.153716   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:55:57.184300   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:55:57.184359   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:55:57.266970   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:55:57.267064   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:57.331675   93254 provision.go:87] duration metric: took 757.029391ms to configureAuth
	I1202 19:55:57.331740   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:57.331983   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:57.332101   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:57.363340   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:57.363649   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:57.363662   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:58.504594   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:58.504673   93254 machine.go:97] duration metric: took 5.840377716s to provisionDockerMachine
	I1202 19:55:58.504698   93254 start.go:293] postStartSetup for "ha-791576-m02" (driver="docker")
	I1202 19:55:58.504722   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:58.504818   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:58.504881   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.552759   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.683948   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:58.687504   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:58.687528   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:58.687538   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:55:58.687590   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:55:58.687661   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:55:58.687667   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:55:58.687766   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:55:58.696105   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:58.729078   93254 start.go:296] duration metric: took 224.353376ms for postStartSetup
	I1202 19:55:58.729200   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:58.729258   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.748281   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.865403   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:58.871596   93254 fix.go:56] duration metric: took 6.721165168s for fixHost
	I1202 19:55:58.871617   93254 start.go:83] releasing machines lock for "ha-791576-m02", held for 6.7212084s
	I1202 19:55:58.871682   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:58.902526   93254 out.go:179] * Found network options:
	I1202 19:55:58.905433   93254 out.go:179]   - NO_PROXY=192.168.49.2
	W1202 19:55:58.908359   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:55:58.908394   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:55:58.908458   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:58.908500   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.908758   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:58.908808   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.941876   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.957861   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:59.379469   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:59.393428   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:59.393549   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:59.436981   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
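The find one-liner above looks for bridge or podman CNI configs in /etc/cni/net.d and renames them with a .mk_disabled suffix so only the intended CNI stays active; in this run nothing matched, hence "nothing to disable". A hedged Go equivalent of that rename pass (requires root, shown for illustration only, not minikube's own implementation):

	// Hedged sketch: disable bridge/podman CNI configs by renaming them,
	// the same effect as the find/mv one-liner in the log above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pattern)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}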
	I1202 19:55:59.437054   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:55:59.437109   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:55:59.437185   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:59.476789   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:59.492965   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:59.493030   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:59.510203   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:59.535902   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:59.890794   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:56:00.391688   93254 docker.go:234] disabling docker service ...
	I1202 19:56:00.391868   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:56:00.454884   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:56:00.506073   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:56:00.797340   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:56:01.166082   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:56:01.219009   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:56:01.256352   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:56:01.256455   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.307607   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:56:01.307708   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.346124   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.369272   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.393260   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:56:01.408865   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.438945   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.451063   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
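The sed commands above force the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf to use the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and an unprivileged-port sysctl. A small Go sketch of the same line-oriented rewrite for the first two settings, applied to an inline sample instead of the real file:

	// Hedged sketch: rewrite pause_image and cgroup_manager lines in a
	// crio drop-in, matching the effect of the sed one-liners above.
	// The input string is a made-up sample, not the real 02-crio.conf.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}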
	I1202 19:56:01.488074   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:56:01.499136   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:56:01.507846   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:56:01.747608   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:57:32.000346   93254 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.252704452s)
	I1202 19:57:32.000372   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:57:32.000423   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:57:32.004239   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:57:32.004296   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:57:32.007869   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:57:32.036443   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:57:32.036523   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:32.065233   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:32.100050   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:57:32.103063   93254 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:57:32.106043   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:57:32.121822   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:57:32.126366   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
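The bash fragment above updates /etc/hosts idempotently: strip any line already tagged host.minikube.internal, append the fresh 192.168.49.1 mapping, and copy the temp file back into place. The same idea in Go, operating on an in-memory sample rather than the real /etc/hosts:

	// Hedged sketch: idempotent hosts-entry update, mirroring the
	// grep -v / echo / cp pipeline in the log above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line) // keep every unrelated entry
			}
		}
		kept = append(kept, "192.168.49.1\thost.minikube.internal")
		fmt.Println(strings.Join(kept, "\n"))
	}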
	I1202 19:57:32.138121   93254 mustload.go:66] Loading cluster: ha-791576
	I1202 19:57:32.138366   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:32.138687   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:57:32.155548   93254 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:57:32.155827   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.3
	I1202 19:57:32.155834   93254 certs.go:195] generating shared ca certs ...
	I1202 19:57:32.155849   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:57:32.155961   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:57:32.156000   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:57:32.156007   93254 certs.go:257] generating profile certs ...
	I1202 19:57:32.156076   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:57:32.156141   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.8b416d14
	I1202 19:57:32.156181   93254 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:57:32.156189   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:57:32.156201   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:57:32.156212   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:57:32.156222   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:57:32.156232   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:57:32.156243   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:57:32.156253   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:57:32.156264   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:57:32.156310   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:57:32.156339   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:57:32.156347   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:57:32.156372   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:57:32.156396   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:57:32.156422   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:57:32.156466   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:32.156496   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.156509   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.156520   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.156574   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:57:32.173330   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:57:32.269964   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:57:32.273629   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:57:32.281594   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:57:32.284955   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:57:32.292668   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:57:32.296257   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:57:32.304405   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:57:32.307845   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:57:32.316416   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:57:32.319715   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:57:32.331425   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:57:32.335418   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:57:32.345158   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:57:32.362660   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:57:32.381060   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:57:32.399011   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:57:32.417547   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 19:57:32.436697   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:57:32.454716   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:57:32.472049   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:57:32.488952   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:57:32.507493   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:57:32.525119   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:57:32.543594   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:57:32.556208   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:57:32.568883   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:57:32.582212   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:57:32.594098   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:57:32.606261   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:57:32.618196   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:57:32.631378   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:57:32.637197   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:57:32.645952   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.649933   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.650038   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.692551   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:57:32.700398   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:57:32.708435   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.711984   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.712047   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.752921   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:57:32.760626   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:57:32.768641   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.772345   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.772443   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.817730   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
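The openssl/ln pairs above wire the copied PEMs into the node's OpenSSL trust store: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA here), and the <hash>.0 symlink in /etc/ssl/certs is what OpenSSL-based clients look up when verifying against that CA. A hedged sketch that shells out to openssl for the hash and creates the link; paths are illustrative, openssl must be on PATH, and root is required to write /etc/ssl/certs:

	// Hedged sketch: compute the OpenSSL subject hash of a PEM and link
	// /etc/ssl/certs/<hash>.0 at it, like the ln -fs commands above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem" // assumed path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // replace an existing link, mirroring ln -fs
		if err := os.Symlink(pem, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println(link, "->", pem)
	}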
	I1202 19:57:32.825349   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:57:32.829063   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:57:32.869702   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:57:32.910289   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:57:32.951408   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:57:32.991818   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:57:33.032586   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
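Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, which is how stale control-plane certs are caught before being reused. The equivalent check in Go with crypto/x509, using one of the paths from the log as an assumed input:

	// Hedged sketch: the -checkend 86400 test done with crypto/x509
	// instead of the openssl CLI. Path is illustrative.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
		}
	}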
	I1202 19:57:33.073299   93254 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1202 19:57:33.073392   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:57:33.073421   93254 kube-vip.go:115] generating kube-vip config ...
	I1202 19:57:33.073489   93254 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:57:33.084964   93254 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:57:33.085019   93254 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
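The `lsmod | grep ip_vs` probe above is what decides the kube-vip mode: with no ip_vs modules loaded (exit status 1), the generator gives up on IPVS control-plane load-balancing and the rendered manifest relies on ARP leader election for the 192.168.49.254 VIP instead. A minimal Go version of the probe, reading /proc/modules directly, which is effectively what lsmod does:

	// Hedged sketch: check whether any ip_vs kernel module is loaded,
	// the same question the lsmod|grep probe above answers.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/proc/modules")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), "ip_vs") {
				fmt.Println("ip_vs modules available")
				return
			}
		}
		fmt.Println("ip_vs not loaded; skip IPVS control-plane load-balancing")
	}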
	I1202 19:57:33.085079   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:57:33.092389   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:57:33.092504   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:57:33.099839   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:57:33.111954   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:57:33.124537   93254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:57:33.139421   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:57:33.144249   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:33.154311   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:33.286984   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
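The last few steps write the generated kubelet unit and drop-in, add control-plane.minikube.internal to /etc/hosts, then daemon-reload systemd and start the kubelet. A hedged sketch of that finish; the drop-in content is trimmed to a couple of flags, the paths are taken from the log, and on a real node this has to run as root:

	// Hedged sketch: install a kubelet drop-in, reload systemd, and start
	// the kubelet, mirroring the scp/daemon-reload/start sequence above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --node-ip=192.168.49.3\n"
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "systemctl %v: %v\n%s", args, err, out)
				return
			}
		}
		fmt.Println("kubelet started")
	}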
	I1202 19:57:33.300875   93254 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:57:33.301346   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:33.304919   93254 out.go:179] * Verifying Kubernetes components...
	I1202 19:57:33.307970   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:33.441136   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:33.455239   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:57:33.455306   93254 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:57:33.455557   93254 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:57:37.330869   93254 node_ready.go:49] node "ha-791576-m02" is "Ready"
	I1202 19:57:37.330905   93254 node_ready.go:38] duration metric: took 3.875318836s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:57:37.330920   93254 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:57:37.330980   93254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:57:37.350335   93254 api_server.go:72] duration metric: took 4.049370544s to wait for apiserver process to appear ...
	I1202 19:57:37.350361   93254 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:57:37.350381   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:37.437921   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:37.437997   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:37.850509   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:37.877801   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:37.877836   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:38.351486   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:38.375050   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:38.375085   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:38.850665   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:38.878543   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:38.878572   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:39.351038   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:39.378413   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:39.378441   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:39.850846   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:39.864441   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:39.864468   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:40.350812   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:40.361521   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:40.361559   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:40.850824   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:40.864753   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:57:40.866306   93254 api_server.go:141] control plane version: v1.34.2
	I1202 19:57:40.866336   93254 api_server.go:131] duration metric: took 3.51596701s to wait for apiserver health ...
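The long run of 500 responses above is expected during apiserver startup: every probe shows only `[-]poststarthook/rbac/bootstrap-roles failed` while the remaining hooks report ok, and after roughly 3.5s the endpoint flips to 200. A minimal Go sketch of that poll loop against https://192.168.49.2:8443/healthz; TLS verification is skipped here for brevity, whereas the real client trusts the cluster CA and presents the profile's client certificate:

	// Hedged sketch: poll the apiserver /healthz endpoint until it
	// returns 200 or a deadline passes, like the loop in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz returned", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("gave up waiting for apiserver")
	}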
	I1202 19:57:40.866371   93254 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:57:40.881984   93254 system_pods.go:59] 26 kube-system pods found
	I1202 19:57:40.882074   93254 system_pods.go:61] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.882090   93254 system_pods.go:61] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.882098   93254 system_pods.go:61] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:57:40.882107   93254 system_pods.go:61] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:57:40.882112   93254 system_pods.go:61] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:57:40.882116   93254 system_pods.go:61] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:57:40.882146   93254 system_pods.go:61] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:57:40.882164   93254 system_pods.go:61] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:57:40.882169   93254 system_pods.go:61] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:57:40.882175   93254 system_pods.go:61] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:57:40.882183   93254 system_pods.go:61] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running
	I1202 19:57:40.882192   93254 system_pods.go:61] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:57:40.882207   93254 system_pods.go:61] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:57:40.882228   93254 system_pods.go:61] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running
	I1202 19:57:40.882258   93254 system_pods.go:61] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:57:40.882267   93254 system_pods.go:61] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:57:40.882271   93254 system_pods.go:61] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:57:40.882280   93254 system_pods.go:61] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:57:40.882288   93254 system_pods.go:61] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:57:40.882291   93254 system_pods.go:61] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:57:40.882295   93254 system_pods.go:61] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:57:40.882298   93254 system_pods.go:61] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:57:40.882302   93254 system_pods.go:61] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:57:40.882306   93254 system_pods.go:61] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:57:40.882325   93254 system_pods.go:61] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:57:40.882337   93254 system_pods.go:61] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:57:40.882356   93254 system_pods.go:74] duration metric: took 15.961542ms to wait for pod list to return data ...
	I1202 19:57:40.882368   93254 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:57:40.886711   93254 default_sa.go:45] found service account: "default"
	I1202 19:57:40.886765   93254 default_sa.go:55] duration metric: took 4.377498ms for default service account to be created ...
	I1202 19:57:40.886816   93254 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:57:40.896351   93254 system_pods.go:86] 26 kube-system pods found
	I1202 19:57:40.896402   93254 system_pods.go:89] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.896455   93254 system_pods.go:89] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.896471   93254 system_pods.go:89] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:57:40.896477   93254 system_pods.go:89] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:57:40.896488   93254 system_pods.go:89] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:57:40.896493   93254 system_pods.go:89] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:57:40.896517   93254 system_pods.go:89] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:57:40.896529   93254 system_pods.go:89] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:57:40.896547   93254 system_pods.go:89] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:57:40.896561   93254 system_pods.go:89] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:57:40.896567   93254 system_pods.go:89] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running
	I1202 19:57:40.896577   93254 system_pods.go:89] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:57:40.896584   93254 system_pods.go:89] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:57:40.896589   93254 system_pods.go:89] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running
	I1202 19:57:40.896594   93254 system_pods.go:89] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:57:40.896605   93254 system_pods.go:89] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:57:40.896635   93254 system_pods.go:89] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:57:40.896647   93254 system_pods.go:89] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:57:40.896651   93254 system_pods.go:89] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:57:40.896655   93254 system_pods.go:89] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:57:40.896660   93254 system_pods.go:89] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:57:40.896669   93254 system_pods.go:89] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:57:40.896714   93254 system_pods.go:89] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:57:40.896731   93254 system_pods.go:89] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:57:40.896736   93254 system_pods.go:89] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:57:40.896740   93254 system_pods.go:89] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:57:40.896767   93254 system_pods.go:126] duration metric: took 9.944455ms to wait for k8s-apps to be running ...
	I1202 19:57:40.896779   93254 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:57:40.896851   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:57:40.912940   93254 system_svc.go:56] duration metric: took 16.146284ms WaitForService to wait for kubelet
	I1202 19:57:40.912971   93254 kubeadm.go:587] duration metric: took 7.612010896s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
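	(The block above waits for every kube-system pod to report Running and for the default service account to exist before the kubeadm wait completes. A rough client-go sketch of that kind of check — an illustrative stand-in, not minikube's actual system_pods.go logic; the clientset and context are assumed to already exist:

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// allSystemPodsRunning reports whether every pod in kube-system is in the Running phase.
	func allSystemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("pod %q is %s, still waiting\n", p.Name, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	}

	A real waiter would wrap this in a poll loop with a timeout, which is what the "duration metric: took ..." lines measure.)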
	I1202 19:57:40.913011   93254 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:57:40.922663   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922709   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922747   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922761   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922765   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922770   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922782   93254 node_conditions.go:105] duration metric: took 9.75895ms to run NodePressure ...
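	(The NodePressure lines above read each node's ephemeral-storage and CPU capacity from the node objects. A hedged sketch of reading those fields with client-go, again assuming a pre-built clientset; the function name is illustrative:

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity logs the CPU and ephemeral-storage capacity of every node.
	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
		return nil
	})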
	I1202 19:57:40.922797   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:57:40.922840   93254 start.go:256] writing updated cluster config ...
	I1202 19:57:40.926963   93254 out.go:203] 
	I1202 19:57:40.930189   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:40.930349   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:40.933758   93254 out.go:179] * Starting "ha-791576-m04" worker node in "ha-791576" cluster
	I1202 19:57:40.937496   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:57:40.940562   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:57:40.944509   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:57:40.944573   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:57:40.944591   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:57:40.944689   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:57:40.944700   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:57:40.944847   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:40.980485   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:57:40.980503   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:57:40.980516   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:57:40.980539   93254 start.go:360] acquireMachinesLock for ha-791576-m04: {Name:mkf6d085e6ffaf9b8d3c89207d22561aa64cc068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:57:40.980591   93254 start.go:364] duration metric: took 37.824µs to acquireMachinesLock for "ha-791576-m04"
	I1202 19:57:40.980609   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:57:40.980616   93254 fix.go:54] fixHost starting: m04
	I1202 19:57:40.980868   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:57:41.009962   93254 fix.go:112] recreateIfNeeded on ha-791576-m04: state=Stopped err=<nil>
	W1202 19:57:41.009990   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:57:41.013529   93254 out.go:252] * Restarting existing docker container for "ha-791576-m04" ...
	I1202 19:57:41.013708   93254 cli_runner.go:164] Run: docker start ha-791576-m04
	I1202 19:57:41.349696   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:57:41.385329   93254 kic.go:430] container "ha-791576-m04" state is running.
	I1202 19:57:41.385673   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:41.416072   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:41.416305   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:57:41.416360   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:41.450379   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:41.450693   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:41.450702   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:57:41.451334   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:57:44.613206   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m04
	
	I1202 19:57:44.613228   93254 ubuntu.go:182] provisioning hostname "ha-791576-m04"
	I1202 19:57:44.613296   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:44.632442   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:44.632744   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:44.632755   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m04 && echo "ha-791576-m04" | sudo tee /etc/hostname
	I1202 19:57:44.799185   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m04
	
	I1202 19:57:44.799313   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:44.822391   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:44.822698   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:44.822720   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:57:44.979513   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
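	(The provisioning steps above dial the restarted node over SSH — 127.0.0.1:32838, user docker, the machine's id_rsa key, all taken from the log — and run hostname and /etc/hosts commands. A minimal sketch of that pattern using golang.org/x/crypto/ssh; this is not the libmachine code itself, just the general shape:

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH connects with a private key and runs one command, returning its combined output.
	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container, not for production
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	For example, runOverSSH("127.0.0.1:32838", "docker", ".../machines/ha-791576-m04/id_rsa", "hostname") corresponds to the first SSH command shown above.)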
	I1202 19:57:44.979597   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:57:44.979629   93254 ubuntu.go:190] setting up certificates
	I1202 19:57:44.979671   93254 provision.go:84] configureAuth start
	I1202 19:57:44.979758   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:45.000651   93254 provision.go:143] copyHostCerts
	I1202 19:57:45.000689   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:57:45.000721   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:57:45.000728   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:57:45.000802   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:57:45.001053   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:57:45.001076   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:57:45.001081   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:57:45.001115   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:57:45.001161   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:57:45.001176   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:57:45.001180   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:57:45.001205   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:57:45.001250   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m04 san=[127.0.0.1 192.168.49.5 ha-791576-m04 localhost minikube]
	I1202 19:57:45.318146   93254 provision.go:177] copyRemoteCerts
	I1202 19:57:45.318219   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:57:45.318283   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.341445   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:45.449731   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:57:45.449820   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:57:45.472182   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:57:45.472243   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:57:45.492286   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:57:45.492350   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:57:45.510812   93254 provision.go:87] duration metric: took 531.109583ms to configureAuth
	I1202 19:57:45.510841   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:57:45.511124   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:45.511270   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.531424   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:45.532066   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:45.532093   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:57:45.884616   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:57:45.884638   93254 machine.go:97] duration metric: took 4.468325015s to provisionDockerMachine
	I1202 19:57:45.884650   93254 start.go:293] postStartSetup for "ha-791576-m04" (driver="docker")
	I1202 19:57:45.884699   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:57:45.884775   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:57:45.884823   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.903688   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.015544   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:57:46.019398   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:57:46.019427   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:57:46.019438   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:57:46.019497   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:57:46.019580   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:57:46.019594   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:57:46.019695   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:57:46.027313   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:46.046534   93254 start.go:296] duration metric: took 161.868987ms for postStartSetup
	I1202 19:57:46.046614   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:57:46.046664   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.064651   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.170656   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:57:46.175466   93254 fix.go:56] duration metric: took 5.194844037s for fixHost
	I1202 19:57:46.175488   93254 start.go:83] releasing machines lock for "ha-791576-m04", held for 5.194888303s
	I1202 19:57:46.175556   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:46.195693   93254 out.go:179] * Found network options:
	I1202 19:57:46.198432   93254 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 19:57:46.201295   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201328   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201354   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201369   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:57:46.201448   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:57:46.201500   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.201866   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:57:46.201941   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.219848   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.241958   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.425303   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:57:46.430326   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:57:46.430443   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:57:46.438789   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:57:46.438867   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:57:46.438915   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:57:46.439004   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:57:46.456655   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:57:46.471141   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:57:46.471238   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:57:46.496759   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:57:46.510741   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:57:46.633508   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:57:46.765301   93254 docker.go:234] disabling docker service ...
	I1202 19:57:46.765415   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:57:46.780559   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:57:46.793987   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:57:46.911887   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:57:47.041997   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:57:47.056582   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:57:47.071233   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:57:47.071325   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.080316   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:57:47.080415   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.090821   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.100556   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.110245   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:57:47.121207   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.131994   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.141137   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.150939   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:57:47.158669   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:57:47.166378   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:47.292693   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:57:47.494962   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:57:47.495081   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:57:47.499951   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:57:47.500031   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:57:47.503579   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:57:47.538410   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:57:47.538551   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:47.570927   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:47.607710   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:57:47.610516   93254 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:57:47.613449   93254 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 19:57:47.616291   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:57:47.633448   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:57:47.637365   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:47.649386   93254 mustload.go:66] Loading cluster: ha-791576
	I1202 19:57:47.649615   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:47.649896   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:57:47.667951   93254 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:57:47.668231   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.5
	I1202 19:57:47.668239   93254 certs.go:195] generating shared ca certs ...
	I1202 19:57:47.668253   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:57:47.668379   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:57:47.668418   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:57:47.668429   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:57:47.668440   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:57:47.668450   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:57:47.668462   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:57:47.668518   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:57:47.668548   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:57:47.668557   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:57:47.668584   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:57:47.668607   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:57:47.668629   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:57:47.668673   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:47.668703   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.668715   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.668726   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.668743   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:57:47.691818   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:57:47.709295   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:57:47.728849   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:57:47.751519   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:57:47.769113   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:57:47.789898   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:57:47.811416   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:57:47.817999   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:57:47.826285   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.829982   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.830054   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.872757   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:57:47.880633   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:57:47.889438   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.893309   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.893421   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.934334   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:57:47.942513   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:57:47.950820   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.955232   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.955298   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:57:48.000169   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:57:48.008314   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:57:48.014820   93254 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 19:57:48.014881   93254 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2 crio false true} ...
	I1202 19:57:48.014972   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:57:48.015054   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:57:48.026264   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:57:48.026381   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1202 19:57:48.034605   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:57:48.048065   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:57:48.063803   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:57:48.067995   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:48.077597   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:48.208286   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:48.223948   93254 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1202 19:57:48.224395   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:48.229649   93254 out.go:179] * Verifying Kubernetes components...
	I1202 19:57:48.232645   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:48.363476   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:48.379483   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:57:48.379562   93254 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
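	(The kapi.go dump above is the rest.Config minikube builds from the profile's client certificate, key, and CA; the warning shows the stale VIP host being swapped for the first control-plane endpoint. A cut-down sketch of constructing an equivalent clientset with the public rest.TLSClientConfig type — paths and host copied from the log, everything else illustrative:

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func newHAClient() (*kubernetes.Clientset, error) {
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443", // the endpoint the stale VIP host was overridden with
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key",
				CAFile:   "/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt",
			},
			// QPS and Burst are left at their defaults here, which is why the
			// "client-side throttling" waits appear further down in the log.
		}
		return kubernetes.NewForConfig(cfg)
	})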
	I1202 19:57:48.379785   93254 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m04" to be "Ready" ...
	W1202 19:57:50.383622   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	W1202 19:57:52.383990   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	W1202 19:57:54.883829   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	I1202 19:57:55.884383   93254 node_ready.go:49] node "ha-791576-m04" is "Ready"
	I1202 19:57:55.884416   93254 node_ready.go:38] duration metric: took 7.504611892s for node "ha-791576-m04" to be "Ready" ...
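	(node_ready.go above polls the node object until its Ready condition flips from Unknown to True. A sketch of that condition check with client-go; the poll/backoff wrapper and retry logging are omitted, and the function name is illustrative:

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady returns true once the node's Ready condition is True.
	func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})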
	I1202 19:57:55.884429   93254 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:57:55.884499   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:57:55.899211   93254 system_svc.go:56] duration metric: took 14.774003ms WaitForService to wait for kubelet
	I1202 19:57:55.899239   93254 kubeadm.go:587] duration metric: took 7.675249996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:57:55.899279   93254 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:57:55.902757   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902783   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902794   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902800   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902805   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902809   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902813   93254 node_conditions.go:105] duration metric: took 3.530143ms to run NodePressure ...
	I1202 19:57:55.902825   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:57:55.902850   93254 start.go:256] writing updated cluster config ...
	I1202 19:57:55.903157   93254 ssh_runner.go:195] Run: rm -f paused
	I1202 19:57:55.907062   93254 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:57:55.907561   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:57:55.926185   93254 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hw99j" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 19:57:57.936730   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:00.437098   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:02.936225   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:04.937647   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:07.433127   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:09.433300   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:11.439409   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:13.936991   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:16.432700   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:18.432998   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	I1202 19:58:19.936601   93254 pod_ready.go:94] pod "coredns-66bc5c9577-hw99j" is "Ready"
	I1202 19:58:19.936627   93254 pod_ready.go:86] duration metric: took 24.01037278s for pod "coredns-66bc5c9577-hw99j" in "kube-system" namespace to be "Ready" or be gone ...
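	(pod_ready.go above applies the same polling per pod, treating a pod as done once its Ready condition is True, or once the pod and its node are gone, as with the m03 entries further down. A hedged sketch of the per-pod condition check; the "or be gone" handling and retry loop are left out:

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady returns true once the pod's Ready condition is True.
	func podIsReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})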
	I1202 19:58:19.936639   93254 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w2245" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.946385   93254 pod_ready.go:94] pod "coredns-66bc5c9577-w2245" is "Ready"
	I1202 19:58:19.946408   93254 pod_ready.go:86] duration metric: took 9.76284ms for pod "coredns-66bc5c9577-w2245" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.950499   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.967558   93254 pod_ready.go:94] pod "etcd-ha-791576" is "Ready"
	I1202 19:58:19.967580   93254 pod_ready.go:86] duration metric: took 17.043001ms for pod "etcd-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.967589   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.983217   93254 pod_ready.go:94] pod "etcd-ha-791576-m02" is "Ready"
	I1202 19:58:19.983312   93254 pod_ready.go:86] duration metric: took 15.715518ms for pod "etcd-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.983336   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.126953   93254 request.go:683] "Waited before sending request" delay="135.197879ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m03"
	I1202 19:58:20.129983   93254 pod_ready.go:99] pod "etcd-ha-791576-m03" in "kube-system" namespace is gone: node "ha-791576-m03" hosting pod "etcd-ha-791576-m03" is not found/running (skipping!): nodes "ha-791576-m03" not found
	I1202 19:58:20.130062   93254 pod_ready.go:86] duration metric: took 146.705626ms for pod "etcd-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.327487   93254 request.go:683] "Waited before sending request" delay="197.274849ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1202 19:58:20.331946   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.527354   93254 request.go:683] "Waited before sending request" delay="195.301984ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576"
	I1202 19:58:20.726783   93254 request.go:683] "Waited before sending request" delay="195.232619ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:20.729884   93254 pod_ready.go:94] pod "kube-apiserver-ha-791576" is "Ready"
	I1202 19:58:20.729911   93254 pod_ready.go:86] duration metric: took 397.935401ms for pod "kube-apiserver-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.729921   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.927333   93254 request.go:683] "Waited before sending request" delay="197.344927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576-m02"
	I1202 19:58:21.127530   93254 request.go:683] "Waited before sending request" delay="195.226515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m02"
	I1202 19:58:21.134380   93254 pod_ready.go:94] pod "kube-apiserver-ha-791576-m02" is "Ready"
	I1202 19:58:21.134412   93254 pod_ready.go:86] duration metric: took 404.483988ms for pod "kube-apiserver-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.134423   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.326813   93254 request.go:683] "Waited before sending request" delay="192.320431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576-m03"
	I1202 19:58:21.527439   93254 request.go:683] "Waited before sending request" delay="197.329437ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m03"
	I1202 19:58:21.533492   93254 pod_ready.go:99] pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace is gone: node "ha-791576-m03" hosting pod "kube-apiserver-ha-791576-m03" is not found/running (skipping!): nodes "ha-791576-m03" not found
	I1202 19:58:21.533559   93254 pod_ready.go:86] duration metric: took 399.129563ms for pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.727056   93254 request.go:683] "Waited before sending request" delay="193.360691ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1202 19:58:21.730488   93254 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.926811   93254 request.go:683] "Waited before sending request" delay="196.233661ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-791576"
	I1202 19:58:22.127186   93254 request.go:683] "Waited before sending request" delay="194.445087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:22.326846   93254 request.go:683] "Waited before sending request" delay="96.137701ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-791576"
	I1202 19:58:22.527173   93254 request.go:683] "Waited before sending request" delay="197.340316ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:22.927176   93254 request.go:683] "Waited before sending request" delay="193.337028ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:23.326849   93254 request.go:683] "Waited before sending request" delay="93.266332ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	W1202 19:58:23.736689   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:25.737056   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:27.748280   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:30.236783   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:32.236980   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:34.736941   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:37.237158   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	I1202 19:58:38.237174   93254 pod_ready.go:94] pod "kube-controller-manager-ha-791576" is "Ready"
	I1202 19:58:38.237206   93254 pod_ready.go:86] duration metric: took 16.506691586s for pod "kube-controller-manager-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:38.237217   93254 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 19:58:40.244619   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:42.254491   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:44.742876   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:46.743816   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:49.244146   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:51.244844   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:53.742978   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:55.743809   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:58.244614   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:00.270137   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:02.744270   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:04.744321   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:07.244122   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:09.253242   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:11.744525   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:14.244287   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:16.743480   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:18.743527   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:20.744157   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:22.744418   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:25.244307   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:27.244638   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:29.747394   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:32.243699   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:34.244795   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:36.744345   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:39.244487   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:41.743981   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:44.244128   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:46.743606   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:49.243339   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:51.244231   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:53.743102   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:56.242882   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:58.243182   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:00.266823   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:02.745097   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:05.243680   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:07.244023   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:09.743730   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:12.243875   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:14.744016   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:17.243913   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:19.244051   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:21.244857   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:23.743729   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:25.744255   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:27.744400   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:30.244688   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:32.247066   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:34.743523   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:37.244239   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:39.743699   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:41.744670   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:44.244162   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:46.743513   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:49.245392   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:51.744149   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:54.248947   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:56.743993   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:59.244304   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:01.246223   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:03.744505   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:06.243892   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:08.743156   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:10.743380   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:12.744647   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:15.244219   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:17.744350   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:20.243654   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:22.245725   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:24.247107   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:26.743319   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:28.743362   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:30.744276   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:33.243318   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:35.245433   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:37.743505   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:39.745223   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:42.248295   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:44.742894   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:46.744704   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:49.243457   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:51.244130   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:53.745924   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	I1202 20:01:55.907841   93254 pod_ready.go:86] duration metric: took 3m17.670596483s for pod "kube-controller-manager-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:01:55.907902   93254 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1202 20:01:55.907923   93254 pod_ready.go:40] duration metric: took 4m0.000821875s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:01:55.911296   93254 out.go:203] 
	W1202 20:01:55.914260   93254 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1202 20:01:55.917058   93254 out.go:203] 
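
The wait loop above is minikube polling the Ready condition of kube-controller-manager-ha-791576-m02 every ~2-2.5s until its four-minute extra-wait budget runs out, which is what finally produces the GUEST_START exit. The following is a minimal client-go sketch of that kind of readiness poll, not minikube's actual pod_ready.go; the kubeconfig source, poll interval and deadline are illustrative assumptions.

    // readycheck.go - a minimal sketch (not minikube's pod_ready.go) of polling a pod's
    // Ready condition with client-go; kubeconfig location and timings are assumptions.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig taken from $KUBECONFIG; adjust to the profile's kubeconfig as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m extra-wait budget in the log above
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "kube-controller-manager-ha-791576-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // the log above polls on a roughly 2-2.5s cadence
        }
        fmt.Println("deadline exceeded: pod never reported Ready")
    }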
	
	
	==> CRI-O <==
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.66851571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.69141266Z" level=info msg="Created container d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398: kube-system/storage-provisioner/storage-provisioner" id=1b10ff43-5e40-4558-8196-1d7f016dd505 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.692654188Z" level=info msg="Starting container: d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398" id=1c87f7b0-7024-41ae-99fe-2425cae60e3e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.694389348Z" level=info msg="Started container" PID=1429 containerID=d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398 description=kube-system/storage-provisioner/storage-provisioner id=1c87f7b0-7024-41ae-99fe-2425cae60e3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=efd793dccee0e2915ee98b405885350b8a60e3279add6b36c21a4428221c8a01
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.202100018Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206090778Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206127076Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206153939Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209705243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209867823Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209904696Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213036515Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213066955Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213094302Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.21610966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.216139813Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.228833217Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=39ed74a3-84e9-4181-80c6-ff0f611a3e84 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.23041474Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=10f326ec-4b42-40a0-bdba-06b31bdd4438 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.233901241Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-791576/kube-controller-manager" id=d524785c-b64f-418f-8cc7-4f78914e9ea9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.233996722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.250249794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.252295749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.2704154Z" level=info msg="Created container 2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4: kube-system/kube-controller-manager-ha-791576/kube-controller-manager" id=d524785c-b64f-418f-8cc7-4f78914e9ea9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.274529003Z" level=info msg="Starting container: 2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4" id=2b730746-da1e-4be4-b3ea-e96c0259c15d name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.277250428Z" level=info msg="Started container" PID=1479 containerID=2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4 description=kube-system/kube-controller-manager-ha-791576/kube-controller-manager id=2b730746-da1e-4be4-b3ea-e96c0259c15d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4659c27a1e2a230e86c92853e4a009f926841d3b7dc58fbc2c2a31be03f223b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	2f22118538832       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   3 minutes ago       Running             kube-controller-manager   7                   4659c27a1e2a2       kube-controller-manager-ha-791576   kube-system
	d355d98782252       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       5                   efd793dccee0e       storage-provisioner                 kube-system
	c5b23f7fd12dd       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   1                   083931905fb04       busybox-7b57f96db7-l5g8z            default
	5c0daa7c8d4e1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       4                   efd793dccee0e       storage-provisioner                 kube-system
	a7c674fd4beed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   0b0e4231caf19       coredns-66bc5c9577-w2245            kube-system
	1fa21535998b0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   cb80d052040d5       coredns-66bc5c9577-hw99j            kube-system
	355934c2fc929       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   4 minutes ago       Running             kube-proxy                2                   16e723f810dce       kube-proxy-q5vfv                    kube-system
	02e772d860e77       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               2                   9223b1241d5be       kindnet-m2l5j                       kube-system
	ad2e9bee4038e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   4 minutes ago       Exited              kube-controller-manager   6                   4659c27a1e2a2       kube-controller-manager-ha-791576   kube-system
	7193dbe9e1382       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   6 minutes ago       Running             kube-scheduler            2                   4b7e6eb9253e6       kube-scheduler-ha-791576            kube-system
	53ec2f9388eca       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   6 minutes ago       Running             kube-apiserver            2                   11498d51b1e18       kube-apiserver-ha-791576            kube-system
	9e7e710fc30aa       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   6 minutes ago       Running             kube-vip                  2                   447647f67c33c       kube-vip-ha-791576                  kube-system
	935b971802eea       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   6 minutes ago       Running             etcd                      2                   5c5f7b2e5b8f1       etcd-ha-791576                      kube-system
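
The table above is the CRI view of containers on ha-791576 (roughly the data `crictl ps -a` renders). A rough sketch of pulling the same listing straight from the CRI-O socket with the CRI API is below; the socket path comes from the CRI-O log above, everything else (module versions, output formatting) is illustrative.

    // crilist.go - a sketch of listing containers over the CRI-O socket with the CRI API;
    // the socket path is taken from the CRI-O log above, the rest is illustrative.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Roughly the CONTAINER / NAME / ATTEMPT / STATE columns of the table above.
            fmt.Printf("%.13s  %-25s  attempt=%d  %v\n",
                c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }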
	
	
	==> coredns [1fa21535998b03372b957beaac33c0db2b71496fe539f42e2245c5ea3ba2d6e9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47259 - 63703 "HINFO IN 335106981740875206.600763774367396684. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.032064587s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a7c674fd4beedc2112aa22c1ce1eee71496d5b6be459181558118d06ad4a8445] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59040 - 1455 "HINFO IN 6249761343778063196.7050624658331465362. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039193622s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
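
Both CoreDNS replicas above spend their first seconds unable to list Services, EndpointSlices and Namespaces because dials to 10.96.0.1:443 (the in-cluster Service VIP for the API server) time out while the control plane is still coming back. A minimal reachability probe for that VIP, runnable from a pod's network namespace, might look like the sketch below; it only checks that the TCP/TLS path works and is not minikube- or CoreDNS-specific.

    // apiprobe.go - a rough reachability probe for the in-cluster API VIP seen in the
    // CoreDNS errors above (10.96.0.1:443); illustrative only, run from a pod's netns.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // the CoreDNS dials above failed with an i/o timeout
            Transport: &http.Transport{
                // Skipping verification keeps the probe about TCP/TLS reachability,
                // not about trusting the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.96.0.1:443/version")
        if err != nil {
            fmt.Println("API VIP unreachable:", err) // matches the "dial tcp ... i/o timeout" above
            return
        }
        defer resp.Body.Close()
        // Any HTTP status (even 401/403) means the VIP and the kube-proxy rules behind it work.
        fmt.Println("API VIP reachable, status:", resp.Status)
    }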
	
	
	==> describe nodes <==
	Name:               ha-791576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_41_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:41:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:01:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:01:52 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:01:52 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:01:52 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:01:52 +0000   Tue, 02 Dec 2025 19:47:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-791576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                2cbc5f56-f69a-4743-bfe0-c26cb688e6dd
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l5g8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 coredns-66bc5c9577-hw99j             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
	  kube-system                 coredns-66bc5c9577-w2245             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
	  kube-system                 etcd-ha-791576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
	  kube-system                 kindnet-m2l5j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-ha-791576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-791576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-q5vfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-791576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-791576                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m22s                kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   Starting                 20m                  kube-proxy       
	  Warning  CgroupV1                 20m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 20m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)    kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)    kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)    kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientMemory  20m                  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     20m                  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    20m                  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           20m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-791576 status is now: NodeReady
	  Normal   RegisteredNode           18m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           15m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)    kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)    kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)    kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   Starting                 6m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m9s (x8 over 6m9s)  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m31s                node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	
	
	Name:               ha-791576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:42:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:01:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:01:57 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:01:57 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:01:57 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:01:57 +0000   Tue, 02 Dec 2025 19:42:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-791576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dee40d7f-dceb-491c-be1b-bbfe6e5bbf5d
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-npkff                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-791576-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-ksng5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-ha-791576-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-791576-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-pjkt7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-791576-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-791576-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 3m39s                kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   RegisteredNode           19m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           19m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)    kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)    kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)    kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   Starting                 6m6s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m5s (x8 over 6m6s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m5s (x8 over 6m6s)  kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m5s (x8 over 6m6s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m6s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m31s                node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	
	
	Name:               ha-791576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_44_30_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:01:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:58:46 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:58:46 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:58:46 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:58:46 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-791576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                368f8765-e8de-4d0d-9ce4-3a1b12660712
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-k9bh8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kindnet-8zbzj               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-proxy-4tffm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 3m57s                  kube-proxy       
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m (x3 over 17m)      kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x3 over 17m)      kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x3 over 17m)      kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-791576-m04 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeNotReady             13m                    node-controller  Node ha-791576-m04 status is now: NodeNotReady
	  Normal   Starting                 4m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m15s (x8 over 4m18s)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m15s (x8 over 4m18s)  kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m15s (x8 over 4m18s)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m31s                  node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
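
Among the ha-791576-m02 events above, the ContainerGCFailed warning means the kubelet tried to reach its container runtime while CRI-O was down, so the socket file simply did not exist at that moment. The sketch below reproduces that exact failure mode by dialing the same socket path; it is illustrative, not kubelet code.

    // sockcheck.go - a trivial sketch of the check behind the ContainerGCFailed event on
    // ha-791576-m02: dialing the CRI-O socket fails with ENOENT while crio is not running.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 2*time.Second)
        if err != nil {
            // While CRI-O is down this prints "connect: no such file or directory",
            // the same error wrapped into the kubelet event above.
            fmt.Println("CRI socket not available:", err)
            return
        }
        conn.Close()
        fmt.Println("CRI socket is up")
    }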
	
	
	==> dmesg <==
	[Dec 2 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:41] overlayfs: idmapped layers are currently not supported
	[ +32.622792] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:44] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:45] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:46] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:47] overlayfs: idmapped layers are currently not supported
	[ +24.868591] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:55] overlayfs: idmapped layers are currently not supported
	[  +3.715582] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [935b971802eea43815b6a2ba78749d6f6a65dfeb75a70453def4a7ff8c6e8f29] <==
	{"level":"info","ts":"2025-12-02T19:57:37.289125Z","caller":"traceutil/trace.go:172","msg":"trace[1743455241] range","detail":"{range_begin:/registry/resourceslices; range_end:; response_count:0; response_revision:3243; }","duration":"2.474069251s","start":"2025-12-02T19:57:34.815049Z","end":"2025-12-02T19:57:37.289118Z","steps":["trace[1743455241] 'agreement among raft nodes before linearized reading'  (duration: 2.474052366s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289212Z","caller":"traceutil/trace.go:172","msg":"trace[721646064] range","detail":"{range_begin:/registry/validatingwebhookconfigurations; range_end:; response_count:0; response_revision:3243; }","duration":"2.763305582s","start":"2025-12-02T19:57:34.525901Z","end":"2025-12-02T19:57:37.289207Z","steps":["trace[721646064] 'agreement among raft nodes before linearized reading'  (duration: 2.763290682s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289327Z","caller":"traceutil/trace.go:172","msg":"trace[893577248] range","detail":"{range_begin:/registry/minions/ha-791576-m02; range_end:; response_count:1; response_revision:3243; }","duration":"3.81972494s","start":"2025-12-02T19:57:33.469598Z","end":"2025-12-02T19:57:37.289323Z","steps":["trace[893577248] 'agreement among raft nodes before linearized reading'  (duration: 3.819681109s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289420Z","caller":"traceutil/trace.go:172","msg":"trace[190971960] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:3243; }","duration":"3.852850225s","start":"2025-12-02T19:57:33.436565Z","end":"2025-12-02T19:57:37.289415Z","steps":["trace[190971960] 'agreement among raft nodes before linearized reading'  (duration: 3.852832453s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289533Z","caller":"traceutil/trace.go:172","msg":"trace[248971072] range","detail":"{range_begin:/registry/minions/ha-791576; range_end:; response_count:1; response_revision:3243; }","duration":"4.111340804s","start":"2025-12-02T19:57:33.178187Z","end":"2025-12-02T19:57:37.289528Z","steps":["trace[248971072] 'agreement among raft nodes before linearized reading'  (duration: 4.111300945s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305093Z","caller":"traceutil/trace.go:172","msg":"trace[297507126] range","detail":"{range_begin:/registry/servicecidrs; range_end:; response_count:0; response_revision:3243; }","duration":"4.463571378s","start":"2025-12-02T19:57:32.841509Z","end":"2025-12-02T19:57:37.305080Z","steps":["trace[297507126] 'agreement among raft nodes before linearized reading'  (duration: 4.463498813s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305320Z","caller":"traceutil/trace.go:172","msg":"trace[595455530] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:3243; }","duration":"4.464632101s","start":"2025-12-02T19:57:32.840683Z","end":"2025-12-02T19:57:37.305315Z","steps":["trace[595455530] 'agreement among raft nodes before linearized reading'  (duration: 4.464565141s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305395Z","caller":"traceutil/trace.go:172","msg":"trace[375887267] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:3243; }","duration":"4.464720362s","start":"2025-12-02T19:57:32.840668Z","end":"2025-12-02T19:57:37.305388Z","steps":["trace[375887267] 'agreement among raft nodes before linearized reading'  (duration: 4.464704994s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305515Z","caller":"traceutil/trace.go:172","msg":"trace[461441867] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:3243; }","duration":"4.464883911s","start":"2025-12-02T19:57:32.840626Z","end":"2025-12-02T19:57:37.305510Z","steps":["trace[461441867] 'agreement among raft nodes before linearized reading'  (duration: 4.464840827s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305579Z","caller":"traceutil/trace.go:172","msg":"trace[59432717] range","detail":"{range_begin:/registry/podtemplates; range_end:; response_count:0; response_revision:3243; }","duration":"4.46501921s","start":"2025-12-02T19:57:32.840556Z","end":"2025-12-02T19:57:37.305575Z","steps":["trace[59432717] 'agreement among raft nodes before linearized reading'  (duration: 4.465005344s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305697Z","caller":"traceutil/trace.go:172","msg":"trace[1458863396] range","detail":"{range_begin:/registry/leases; range_end:; response_count:0; response_revision:3243; }","duration":"4.465158422s","start":"2025-12-02T19:57:32.840534Z","end":"2025-12-02T19:57:37.305692Z","steps":["trace[1458863396] 'agreement among raft nodes before linearized reading'  (duration: 4.46513325s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305800Z","caller":"traceutil/trace.go:172","msg":"trace[1000282895] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:3243; }","duration":"4.465276582s","start":"2025-12-02T19:57:32.840519Z","end":"2025-12-02T19:57:37.305795Z","steps":["trace[1000282895] 'agreement among raft nodes before linearized reading'  (duration: 4.465257522s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305791Z","caller":"traceutil/trace.go:172","msg":"trace[1507459937] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:3243; }","duration":"4.462075152s","start":"2025-12-02T19:57:32.843708Z","end":"2025-12-02T19:57:37.305783Z","steps":["trace[1507459937] 'agreement among raft nodes before linearized reading'  (duration: 4.462030862s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305902Z","caller":"traceutil/trace.go:172","msg":"trace[1236842159] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:3243; }","duration":"4.465397539s","start":"2025-12-02T19:57:32.840500Z","end":"2025-12-02T19:57:37.305898Z","steps":["trace[1236842159] 'agreement among raft nodes before linearized reading'  (duration: 4.465372333s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305984Z","caller":"traceutil/trace.go:172","msg":"trace[98205234] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:3243; }","duration":"4.465496416s","start":"2025-12-02T19:57:32.840483Z","end":"2025-12-02T19:57:37.305980Z","steps":["trace[98205234] 'agreement among raft nodes before linearized reading'  (duration: 4.465480556s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.305983Z","caller":"traceutil/trace.go:172","msg":"trace[651506030] range","detail":"{range_begin:/registry/endpointslices; range_end:; response_count:0; response_revision:3243; }","duration":"4.463451594s","start":"2025-12-02T19:57:32.842526Z","end":"2025-12-02T19:57:37.305977Z","steps":["trace[651506030] 'agreement among raft nodes before linearized reading'  (duration: 4.463413179s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306057Z","caller":"traceutil/trace.go:172","msg":"trace[975673522] range","detail":"{range_begin:/registry/priorityclasses; range_end:; response_count:0; response_revision:3243; }","duration":"4.465585932s","start":"2025-12-02T19:57:32.840467Z","end":"2025-12-02T19:57:37.306053Z","steps":["trace[975673522] 'agreement among raft nodes before linearized reading'  (duration: 4.46557223s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306131Z","caller":"traceutil/trace.go:172","msg":"trace[1518714069] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:3243; }","duration":"4.465675768s","start":"2025-12-02T19:57:32.840451Z","end":"2025-12-02T19:57:37.306127Z","steps":["trace[1518714069] 'agreement among raft nodes before linearized reading'  (duration: 4.465661179s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306144Z","caller":"traceutil/trace.go:172","msg":"trace[1421790493] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:3243; }","duration":"4.46438218s","start":"2025-12-02T19:57:32.841756Z","end":"2025-12-02T19:57:37.306138Z","steps":["trace[1421790493] 'agreement among raft nodes before linearized reading'  (duration: 4.464311749s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306208Z","caller":"traceutil/trace.go:172","msg":"trace[1547210265] range","detail":"{range_begin:/registry/validatingwebhookconfigurations; range_end:; response_count:0; response_revision:3243; }","duration":"4.465771084s","start":"2025-12-02T19:57:32.840433Z","end":"2025-12-02T19:57:37.306204Z","steps":["trace[1547210265] 'agreement among raft nodes before linearized reading'  (duration: 4.465752828s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306205Z","caller":"traceutil/trace.go:172","msg":"trace[249476617] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:3243; }","duration":"4.464640799s","start":"2025-12-02T19:57:32.841560Z","end":"2025-12-02T19:57:37.306200Z","steps":["trace[249476617] 'agreement among raft nodes before linearized reading'  (duration: 4.464625038s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306287Z","caller":"traceutil/trace.go:172","msg":"trace[1206716498] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:3243; }","duration":"4.465871217s","start":"2025-12-02T19:57:32.840411Z","end":"2025-12-02T19:57:37.306283Z","steps":["trace[1206716498] 'agreement among raft nodes before linearized reading'  (duration: 4.465856891s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.306312Z","caller":"traceutil/trace.go:172","msg":"trace[602901791] range","detail":"{range_begin:/registry/persistentvolumes; range_end:; response_count:0; response_revision:3243; }","duration":"4.464761461s","start":"2025-12-02T19:57:32.841544Z","end":"2025-12-02T19:57:37.306306Z","steps":["trace[602901791] 'agreement among raft nodes before linearized reading'  (duration: 4.464743147s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.314721Z","caller":"traceutil/trace.go:172","msg":"trace[750533836] transaction","detail":"{read_only:false; response_revision:3244; number_of_response:1; }","duration":"3.394937822s","start":"2025-12-02T19:57:33.919770Z","end":"2025-12-02T19:57:37.314708Z","steps":["trace[750533836] 'process raft request'  (duration: 3.394760735s)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:57:37.289644Z","caller":"traceutil/trace.go:172","msg":"trace[1648650429] range","detail":"{range_begin:/registry/leases/kube-node-lease/ha-791576; range_end:; response_count:1; response_revision:3243; }","duration":"4.146672746s","start":"2025-12-02T19:57:33.142966Z","end":"2025-12-02T19:57:37.289639Z","steps":["trace[1648650429] 'agreement among raft nodes before linearized reading'  (duration: 4.146635356s)"],"step_count":1}
	
	
	==> kernel <==
	 20:02:01 up  1:44,  0 user,  load average: 0.88, 1.44, 1.41
	Linux ha-791576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [02e772d860e77006ec0b051223b10e67de2ed41ecc1b18874de331cdb32bd1a6] <==
	I1202 20:01:18.202125       1 main.go:301] handling current node
	I1202 20:01:28.201536       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:01:28.201568       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:01:28.201891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:28.201909       1 main.go:301] handling current node
	I1202 20:01:28.201922       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:01:28.201928       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:01:38.201065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:38.201205       1 main.go:301] handling current node
	I1202 20:01:38.201245       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:01:38.201290       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:01:38.201567       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:01:38.201636       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:01:48.205486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:48.205520       1 main.go:301] handling current node
	I1202 20:01:48.205536       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:01:48.205541       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:01:48.205735       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:01:48.205749       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:01:58.201490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:01:58.201522       1 main.go:301] handling current node
	I1202 20:01:58.201538       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:01:58.201543       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:01:58.201709       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:01:58.201735       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [53ec2f9388ecacb74421a2e8c3b5d943afd06e705e756948fa12bc41dd8a37f9] <==
	{"level":"warn","ts":"2025-12-02T19:57:37.266812Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d23c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.266832Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001d01680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274392Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a21a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274785Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025223c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274836Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001283860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274869Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001e212c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274899Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000c8fa40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274921Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d32c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274946Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002889680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274966Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f383c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274993Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028881e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275010Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023a65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275027Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f394a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275097Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400248da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275220Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028890e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.279316Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028541e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.279511Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000c8fa40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1202 19:57:37.337298       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	{"level":"warn","ts":"2025-12-02T19:57:38.096782Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d23c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1202 19:57:38.096878       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.128576061s, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	I1202 19:57:40.624545       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1202 19:57:40.936228       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1202 19:58:29.433907       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 19:58:31.983629       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 19:58:32.004810       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4] <==
	E1202 19:59:09.180866       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180895       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180903       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180909       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180913       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180918       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180924       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	I1202 19:59:09.200950       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-791576-m03"
	I1202 19:59:09.233282       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-791576-m03"
	I1202 19:59:09.233391       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-xjn7v"
	I1202 19:59:09.267544       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-xjn7v"
	I1202 19:59:09.267590       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-791576-m03"
	I1202 19:59:09.304785       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-791576-m03"
	I1202 19:59:09.305077       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-791576-m03"
	I1202 19:59:09.339802       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-791576-m03"
	I1202 19:59:09.339845       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-791576-m03"
	I1202 19:59:09.388801       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-791576-m03"
	I1202 19:59:09.388937       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dvt58"
	I1202 19:59:09.431739       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dvt58"
	I1202 19:59:09.432083       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:59:09.469146       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:59:09.469262       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-791576-m03"
	I1202 19:59:09.512224       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-791576-m03"
	I1202 19:59:09.512321       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2pf27"
	I1202 19:59:09.551464       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2pf27"
	
	
	==> kube-controller-manager [ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b] <==
	I1202 19:57:21.480081       1 serving.go:386] Generated self-signed cert in-memory
	I1202 19:57:22.307047       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 19:57:22.307083       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:57:22.308866       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 19:57:22.309043       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 19:57:22.309144       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 19:57:22.309457       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1202 19:57:37.311326       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [355934c2fc92908a3d014373a10e2ad38fde6cd637a204a613dd4cf27e58d5de] <==
	I1202 19:57:38.434579       1 server_linux.go:53] "Using iptables proxy"
	I1202 19:57:38.599480       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:57:38.700098       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:57:38.700208       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 19:57:38.700313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:57:38.806652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 19:57:38.806864       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:57:38.840406       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:57:38.840778       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:57:38.840994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:57:38.842280       1 config.go:200] "Starting service config controller"
	I1202 19:57:38.842343       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:57:38.842391       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:57:38.842435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:57:38.842472       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:57:38.842507       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:57:38.849620       1 config.go:309] "Starting node config controller"
	I1202 19:57:38.849733       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:57:38.849766       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 19:57:38.946880       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:57:38.946930       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 19:57:38.946999       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7193dbe9e138217968055549ef0c321456d1ba0d688ed39c88faecd90d288068] <==
	I1202 19:55:55.231666       1 serving.go:386] Generated self-signed cert in-memory
	W1202 19:56:01.322035       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 19:56:01.322158       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 19:56:01.322194       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 19:56:01.322238       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 19:56:01.414510       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 19:56:01.414609       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:56:01.445556       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:56:01.445721       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:56:01.445867       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 19:56:01.446001       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 19:56:01.545921       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1202 19:58:29.288494       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-k9bh8\": pod busybox-7b57f96db7-k9bh8 is already assigned to node \"ha-791576-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-k9bh8" node="ha-791576-m04"
	E1202 19:58:29.288769       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4eb2efb8-62a6-4a52-bafd-ddc9837ef293(default/busybox-7b57f96db7-k9bh8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-k9bh8"
	E1202 19:58:29.288838       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-k9bh8\": pod busybox-7b57f96db7-k9bh8 is already assigned to node \"ha-791576-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-k9bh8"
	I1202 19:58:29.290780       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-k9bh8" node="ha-791576-m04"
	
	
	==> kubelet <==
	Dec 02 19:57:23 ha-791576 kubelet[806]: E1202 19:57:23.174754     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"re
cursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-791576\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-791576/status?timeout=10s\": context deadline exceeded"
	Dec 02 19:57:32 ha-791576 kubelet[806]: E1202 19:57:32.339488     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-791576?timeout=10s\": context deadline exceeded" interval="800ms"
	Dec 02 19:57:33 ha-791576 kubelet[806]: E1202 19:57:33.176339     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-791576\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-791576?timeout=10s\": context deadline exceeded"
	Dec 02 19:57:35 ha-791576 kubelet[806]: E1202 19:57:35.968777     806 kubelet.go:3222] "Failed creating a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-controller-manager-ha-791576"
	Dec 02 19:57:35 ha-791576 kubelet[806]: I1202 19:57:35.968822     806 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-791576"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.477293     806 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.550540     806 scope.go:117] "RemoveContainer" containerID="1481b78f0b49db2c5b77d1f4b1a48f1606d7b5b7efc574d9920be0dcf7d60944"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.551052     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:37 ha-791576 kubelet[806]: E1202 19:57:37.551183     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:38 ha-791576 kubelet[806]: W1202 19:57:38.005619     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c WatchSource:0}: Error finding container 083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c: Status 404 returned error can't find the container with id 083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.163483     806 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-791576\" already exists" pod="kube-system/kube-scheduler-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: I1202 19:57:38.163520     806 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.241716     806 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-vip-ha-791576\" already exists" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: I1202 19:57:38.576730     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.577312     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:45 ha-791576 kubelet[806]: I1202 19:57:45.547133     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:45 ha-791576 kubelet[806]: E1202 19:57:45.547777     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:51 ha-791576 kubelet[806]: E1202 19:57:51.235433     806 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9047b34b16f7f1aeb5b86610976368ec3265e72120dd291f6ef7165fbdb40f01/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9047b34b16f7f1aeb5b86610976368ec3265e72120dd291f6ef7165fbdb40f01/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/4.log: no such file or directory
	Dec 02 19:57:51 ha-791576 kubelet[806]: E1202 19:57:51.237620     806 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/11770d173b0bf8e21fa767a44a6b06c28990c5d024bd0ff30f895a2c8315127e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/11770d173b0bf8e21fa767a44a6b06c28990c5d024bd0ff30f895a2c8315127e/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/5.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/5.log: no such file or directory
	Dec 02 19:57:58 ha-791576 kubelet[806]: I1202 19:57:58.228513     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:58 ha-791576 kubelet[806]: E1202 19:57:58.229379     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:58:08 ha-791576 kubelet[806]: I1202 19:58:08.659780     806 scope.go:117] "RemoveContainer" containerID="5c0daa7c8d4e1a9a2a77b1849e4249d4f9f28faa84c47fbc750bdf4924430591"
	Dec 02 19:58:11 ha-791576 kubelet[806]: I1202 19:58:11.230446     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:58:11 ha-791576 kubelet[806]: E1202 19:58:11.230623     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:58:26 ha-791576 kubelet[806]: I1202 19:58:26.228365     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-791576 -n ha-791576
helpers_test.go:269: (dbg) Run:  kubectl --context ha-791576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.50s)
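The etcd traces and apiserver "etcdserver: leader changed" retries in the logs above show linearized reads stalling for roughly 4.5s behind a leader election, which lines up with this degraded-cluster failure. A minimal sketch for checking etcd member health on this cluster by hand; the pod name etcd-ha-791576 and the cert paths under /var/lib/minikube/certs/etcd/ are assumptions based on minikube's kubeadm certs directory, not something confirmed by this report:

	# assumed pod name and cert paths; adjust to what `kubectl -n kube-system get pods` actually shows
	kubectl --context ha-791576 -n kube-system exec etcd-ha-791576 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status --cluster -w table

If the leader column keeps changing between invocations, the long "agreement among raft nodes before linearized reading" steps seen above are the expected symptom on the apiserver side.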

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.022972715s)
ha_test.go:309: expected profile "ha-791576" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-791576\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-791576\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-791576\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-devi
ce-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":
false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
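The check at ha_test.go:309 reads the Status field of the matching profile from `profile list --output json`; here it is still "Starting" while the test expects "HAppy". A minimal way to reproduce the same check by hand, assuming jq is available on the host (the .valid[].Name/.Status structure is taken from the JSON shown above):

	out/minikube-linux-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name=="ha-791576") | .Status'
	# prints "Starting" in this run; the assertion only passes once it reports "HAppy"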
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-791576
helpers_test.go:243: (dbg) docker inspect ha-791576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	        "Created": "2025-12-02T19:40:54.919017186Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93382,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T19:55:44.458015606Z",
	            "FinishedAt": "2025-12-02T19:55:43.73005975Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hostname",
	        "HostsPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/hosts",
	        "LogPath": "/var/lib/docker/containers/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94-json.log",
	        "Name": "/ha-791576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-791576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-791576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94",
	                "LowerDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcff5221eb6d5891f9a11c2319668d459a75dcb80d386900a1da75ab9b9edee5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-791576",
	                "Source": "/var/lib/docker/volumes/ha-791576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-791576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-791576",
	                "name.minikube.sigs.k8s.io": "ha-791576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "751177d5ee464382bdbbbb72de4fb526573054bfa543b68ed932cd0c1d287957",
	            "SandboxKey": "/var/run/docker/netns/751177d5ee46",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-791576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:0b:05:fd:a7:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56dad1208e3b87b69e94173604d284ae0e7c0f0097a9b4d2483c8eb74a9ccc65",
	                    "EndpointID": "f86c1b624622b29b058cdcb9ce2cd5d942bc8d95518744c77b2a01273b6d217e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-791576",
	                        "f426f8269bd9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-791576 -n ha-791576
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 logs -n 25: (1.869454885s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp testdata/cp-test.txt ha-791576-m04:/home/docker/cp-test.txt                                                             │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m04.txt │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m04_ha-791576.txt                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576.txt                                                 │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m02 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ cp      │ ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt               │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ ssh     │ ha-791576 ssh -n ha-791576-m03 sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:45 UTC │
	│ node    │ ha-791576 node start m02 --alsologtostderr -v 5                                                                                      │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:45 UTC │ 02 Dec 25 19:46 UTC │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │ 02 Dec 25 19:46 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5                                                                                   │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:46 UTC │                     │
	│ node    │ ha-791576 node list --alsologtostderr -v 5                                                                                           │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	│ node    │ ha-791576 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │ 02 Dec 25 19:55 UTC │
	│ stop    │ ha-791576 stop --alsologtostderr -v 5                                                                                                │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │ 02 Dec 25 19:55 UTC │
	│ start   │ ha-791576 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 19:55 UTC │                     │
	│ node    │ ha-791576 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-791576 │ jenkins │ v1.37.0 │ 02 Dec 25 20:02 UTC │ 02 Dec 25 20:03 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:55:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:55:44.177967   93254 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:55:44.178109   93254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.178122   93254 out.go:374] Setting ErrFile to fd 2...
	I1202 19:55:44.178128   93254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.178419   93254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:55:44.178766   93254 out.go:368] Setting JSON to false
	I1202 19:55:44.179556   93254 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5883,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:55:44.179622   93254 start.go:143] virtualization:  
	I1202 19:55:44.182617   93254 out.go:179] * [ha-791576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:55:44.186436   93254 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:55:44.186585   93254 notify.go:221] Checking for updates...
	I1202 19:55:44.192062   93254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:55:44.194974   93254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:44.197803   93254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:55:44.200682   93254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:55:44.203721   93254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:55:44.206951   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:44.207525   93254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:55:44.231700   93254 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:55:44.231811   93254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:44.301596   93254 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:55:44.287047316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:44.301733   93254 docker.go:319] overlay module found
	I1202 19:55:44.304924   93254 out.go:179] * Using the docker driver based on existing profile
	I1202 19:55:44.307862   93254 start.go:309] selected driver: docker
	I1202 19:55:44.307884   93254 start.go:927] validating driver "docker" against &{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kube
flow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:44.308026   93254 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:55:44.308131   93254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:55:44.371573   93254 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-02 19:55:44.362799023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:55:44.372011   93254 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:55:44.372042   93254 cni.go:84] Creating CNI manager for ""
	I1202 19:55:44.372097   93254 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 19:55:44.372154   93254 start.go:353] cluster config:
	{Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:44.377185   93254 out.go:179] * Starting "ha-791576" primary control-plane node in "ha-791576" cluster
	I1202 19:55:44.379977   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:55:44.382846   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:55:44.385821   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:44.385879   93254 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 19:55:44.385893   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:55:44.385993   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:55:44.386008   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:55:44.386151   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:44.386369   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:55:44.405321   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:55:44.405352   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:55:44.405373   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:55:44.405404   93254 start.go:360] acquireMachinesLock for ha-791576: {Name:mkf0dd990410be65dae2a66c72db1ebd52e2b0ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:55:44.405469   93254 start.go:364] duration metric: took 41.304µs to acquireMachinesLock for "ha-791576"
	I1202 19:55:44.405492   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:55:44.405502   93254 fix.go:54] fixHost starting: 
	I1202 19:55:44.405802   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.422067   93254 fix.go:112] recreateIfNeeded on ha-791576: state=Stopped err=<nil>
	W1202 19:55:44.422096   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:55:44.425385   93254 out.go:252] * Restarting existing docker container for "ha-791576" ...
	I1202 19:55:44.425482   93254 cli_runner.go:164] Run: docker start ha-791576
	I1202 19:55:44.656773   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.678497   93254 kic.go:430] container "ha-791576" state is running.
	I1202 19:55:44.678860   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:44.708256   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:44.708493   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:44.708552   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:44.731511   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:44.731837   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:44.731849   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:44.733165   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:55:47.885197   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:55:47.885250   93254 ubuntu.go:182] provisioning hostname "ha-791576"
	I1202 19:55:47.885314   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:47.903491   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:47.903813   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:47.903827   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576 && echo "ha-791576" | sudo tee /etc/hostname
	I1202 19:55:48.069176   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576
	
	I1202 19:55:48.069254   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.089514   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:48.089877   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:48.089901   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:48.242008   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:48.242032   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:55:48.242057   93254 ubuntu.go:190] setting up certificates
	I1202 19:55:48.242069   93254 provision.go:84] configureAuth start
	I1202 19:55:48.242132   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:48.261821   93254 provision.go:143] copyHostCerts
	I1202 19:55:48.261871   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:48.261931   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:55:48.261951   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:48.262038   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:55:48.262141   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:48.262166   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:55:48.262174   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:48.262211   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:55:48.262289   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:48.262314   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:55:48.262323   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:48.262355   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:55:48.262435   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576 san=[127.0.0.1 192.168.49.2 ha-791576 localhost minikube]
	I1202 19:55:48.452060   93254 provision.go:177] copyRemoteCerts
	I1202 19:55:48.452139   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:48.452177   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.470613   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:48.573192   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:55:48.573250   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:48.589521   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:55:48.589763   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 19:55:48.606218   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:55:48.606297   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 19:55:48.623387   93254 provision.go:87] duration metric: took 381.29482ms to configureAuth
	I1202 19:55:48.623419   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:48.623653   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:48.623765   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:48.640254   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:48.640566   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1202 19:55:48.640586   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:49.030725   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:49.030745   93254 machine.go:97] duration metric: took 4.32224289s to provisionDockerMachine
	I1202 19:55:49.030757   93254 start.go:293] postStartSetup for "ha-791576" (driver="docker")
	I1202 19:55:49.030768   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:49.030827   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:49.030865   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.051519   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.153353   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:49.156583   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:49.156607   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:49.156618   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:55:49.156674   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:55:49.156758   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:55:49.156764   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:55:49.156861   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:55:49.164042   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:49.180380   93254 start.go:296] duration metric: took 149.593959ms for postStartSetup
	I1202 19:55:49.180465   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:49.180519   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.197329   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.298832   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:49.303554   93254 fix.go:56] duration metric: took 4.898044691s for fixHost
	I1202 19:55:49.303578   93254 start.go:83] releasing machines lock for "ha-791576", held for 4.898097178s
	I1202 19:55:49.303651   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:55:49.320407   93254 ssh_runner.go:195] Run: cat /version.json
	I1202 19:55:49.320456   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.320470   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:49.320533   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:55:49.338342   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.345505   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:55:49.524252   93254 ssh_runner.go:195] Run: systemctl --version
	I1202 19:55:49.530647   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:49.565296   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:49.569498   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:49.569577   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:49.577094   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:55:49.577167   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:55:49.577205   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:55:49.577256   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:49.592079   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:49.605549   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:49.605621   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:49.621023   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:49.635753   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:49.750982   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:55:49.859462   93254 docker.go:234] disabling docker service ...
	I1202 19:55:49.859565   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:55:49.874667   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:55:49.887012   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:55:50.007847   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:55:50.134338   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:55:50.146986   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:55:50.161229   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:55:50.161317   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.170383   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:55:50.170453   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.179542   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.188652   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.197399   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:55:50.205856   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.214897   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.223103   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:55:50.231783   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:55:50.238878   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:55:50.245749   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:50.382453   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:55:50.564448   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:55:50.564526   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:55:50.568176   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:55:50.568235   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:55:50.571563   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:55:50.595656   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:55:50.595739   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:55:50.625390   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:55:50.655103   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:55:50.658061   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:55:50.674479   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:55:50.678575   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:50.688260   93254 kubeadm.go:884] updating cluster {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:55:50.688998   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:50.689083   93254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:50.726565   93254 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:50.726626   93254 crio.go:433] Images already preloaded, skipping extraction
	I1202 19:55:50.726708   93254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:55:50.756058   93254 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:55:50.756081   93254 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:55:50.756091   93254 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1202 19:55:50.756189   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:55:50.756269   93254 ssh_runner.go:195] Run: crio config
	I1202 19:55:50.831624   93254 cni.go:84] Creating CNI manager for ""
	I1202 19:55:50.831657   93254 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 19:55:50.831710   93254 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:55:50.831742   93254 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-791576 NodeName:ha-791576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:55:50.831887   93254 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-791576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:55:50.831904   93254 kube-vip.go:115] generating kube-vip config ...
	I1202 19:55:50.831959   93254 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:55:50.843196   93254 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:55:50.843290   93254 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 19:55:50.843354   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:55:50.850587   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:55:50.850656   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 19:55:50.857765   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 19:55:50.869276   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:55:50.881241   93254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1202 19:55:50.893240   93254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:55:50.905823   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:55:50.909303   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:55:50.918750   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:55:51.026144   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:55:51.042322   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.2
	I1202 19:55:51.042383   93254 certs.go:195] generating shared ca certs ...
	I1202 19:55:51.042413   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.042572   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:55:51.042673   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:55:51.042696   93254 certs.go:257] generating profile certs ...
	I1202 19:55:51.042790   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:55:51.042844   93254 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f
	I1202 19:55:51.042883   93254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1202 19:55:51.207706   93254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f ...
	I1202 19:55:51.207774   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f: {Name:mk0befc0b318cce17722eedc60197d074ef72403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.208003   93254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f ...
	I1202 19:55:51.208041   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f: {Name:mk6747dc6a0e6b21e4d9bc0a0b21cc4e1f72108f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:51.208176   93254 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt.de042c5f -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt
	I1202 19:55:51.208351   93254 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.de042c5f -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key
	I1202 19:55:51.208521   93254 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:55:51.208562   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:55:51.208598   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:55:51.208631   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:55:51.208669   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:55:51.208699   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:55:51.208731   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:55:51.208772   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:55:51.208803   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:55:51.208876   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:55:51.208937   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:55:51.208962   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:55:51.209012   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:55:51.209063   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:55:51.209110   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:55:51.209189   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:51.209271   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.209343   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.209384   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.210038   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:55:51.231782   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:55:51.250385   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:55:51.267781   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:55:51.286345   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 19:55:51.304523   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:55:51.322173   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:55:51.340727   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:55:51.358222   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:55:51.376555   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:55:51.392531   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:55:51.409238   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:55:51.421079   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:55:51.427316   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:55:51.435537   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.438995   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.439062   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:55:51.479993   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:55:51.487626   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:55:51.495524   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.499309   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.499393   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:55:51.539899   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:55:51.548401   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:55:51.556378   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.559859   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.559918   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:55:51.600611   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
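The three cert blocks above follow OpenSSL's subject-hash directory convention: each PEM copied under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, and a symlink named `<hash>.0` is placed in /etc/ssl/certs so OpenSSL's lookup-by-hash can find it. A minimal sketch of the same pattern for a single certificate (the variable names are illustrative, not from the log):

    # Hypothetical sketch: install one PEM cert into the OpenSSL hash directory.
    cert=/usr/share/ca-certificates/44702.pem
    hash=$(openssl x509 -hash -noout -in "$cert")     # e.g. 3ec20f2e, as seen above
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"    # the .0 suffix assumes no hash collision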
	I1202 19:55:51.608321   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:55:51.611874   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:55:51.656450   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:55:51.699650   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:55:51.748675   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:55:51.798307   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:55:51.891003   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
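The `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means the certificate does not expire inside that window, so it can be reused rather than regenerated. A hedged standalone equivalent for one file (path taken from the log):

    # Hypothetical standalone expiry check.
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h (or could not be read)"
    fi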
	I1202 19:55:51.960070   93254 kubeadm.go:401] StartCluster: {Name:ha-791576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:55:51.960253   93254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:55:51.960360   93254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:55:52.020134   93254 cri.go:89] found id: "7193dbe9e138217968055549ef0c321456d1ba0d688ed39c88faecd90d288068"
	I1202 19:55:52.020208   93254 cri.go:89] found id: "53ec2f9388ecacb74421a2e8c3b5d943afd06e705e756948fa12bc41dd8a37f9"
	I1202 19:55:52.020237   93254 cri.go:89] found id: "9e7e710fc30aaba995500f37ffa3972d03427ad4b5096ea5e3f635761be6fe1e"
	I1202 19:55:52.020256   93254 cri.go:89] found id: "b0964e2af680e31e59bc41f16955d47d76026029392b1597b247a7226618e258"
	I1202 19:55:52.020292   93254 cri.go:89] found id: "935b971802eea43815b6a2ba78749d6f6a65dfeb75a70453def4a7ff8c6e8f29"
	I1202 19:55:52.020316   93254 cri.go:89] found id: ""
	I1202 19:55:52.020420   93254 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 19:55:52.039471   93254 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T19:55:52Z" level=error msg="open /run/runc: no such file or directory"
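Before restarting the control plane, minikube looks for paused kube-system containers so it can unpause them; the `runc list` probe fails here because /run/runc does not exist on this host, and the code records it as a warning rather than a fatal error since there is simply no runc state to inspect. A tolerant version of the same probe, purely as an illustration:

    # Hypothetical sketch: report runc-managed containers, or an empty list when
    # the state directory /run/runc is absent (as in the log above).
    sudo sh -c 'test -d /run/runc && runc list -f json || echo "[]"'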
	I1202 19:55:52.039648   93254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:55:52.052041   93254 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 19:55:52.052113   93254 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 19:55:52.052202   93254 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 19:55:52.067291   93254 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:55:52.067793   93254 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-791576" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:52.067946   93254 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "ha-791576" cluster setting kubeconfig missing "ha-791576" context setting]
	I1202 19:55:52.068355   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.069044   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:55:52.069935   93254 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 19:55:52.070037   93254 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 19:55:52.070083   93254 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 19:55:52.070105   93254 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 19:55:52.070125   93254 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 19:55:52.070010   93254 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 19:55:52.070578   93254 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 19:55:52.089251   93254 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 19:55:52.089343   93254 kubeadm.go:602] duration metric: took 37.210796ms to restartPrimaryControlPlane
	I1202 19:55:52.089369   93254 kubeadm.go:403] duration metric: took 129.308895ms to StartCluster
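The restart path hinges on the `diff -u` at 19:55:52.070578: minikube renders a fresh kubeadm.yaml.new and compares it with the copy already on the node, and because the two match it logs "does not require reconfiguration" and skips re-running kubeadm. The decision reduces to an exit-code check along these lines (paths from the log, the surrounding script is illustrative):

    # Hypothetical sketch of the comparison recorded above.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
      echo "running cluster does not require reconfiguration"
    else
      echo "kubeadm.yaml changed; control plane would be reconfigured"
    fi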
	I1202 19:55:52.089422   93254 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.089527   93254 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:55:52.090263   93254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:55:52.090544   93254 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:55:52.090598   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:55:52.090630   93254 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 19:55:52.091558   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:52.096453   93254 out.go:179] * Enabled addons: 
	I1202 19:55:52.099512   93254 addons.go:530] duration metric: took 8.877075ms for enable addons: enabled=[]
	I1202 19:55:52.099607   93254 start.go:247] waiting for cluster config update ...
	I1202 19:55:52.099630   93254 start.go:256] writing updated cluster config ...
	I1202 19:55:52.102945   93254 out.go:203] 
	I1202 19:55:52.106144   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:52.106258   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.109518   93254 out.go:179] * Starting "ha-791576-m02" control-plane node in "ha-791576" cluster
	I1202 19:55:52.112289   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:55:52.115487   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:55:52.118244   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:55:52.118264   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:55:52.118378   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:55:52.118387   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:55:52.118504   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.118707   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:55:52.150292   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:55:52.150314   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:55:52.150328   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:55:52.150350   93254 start.go:360] acquireMachinesLock for ha-791576-m02: {Name:mka8905b718126ef1af03281337b5ea61f190248 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:55:52.150401   93254 start.go:364] duration metric: took 35.93µs to acquireMachinesLock for "ha-791576-m02"
	I1202 19:55:52.150419   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:55:52.150424   93254 fix.go:54] fixHost starting: m02
	I1202 19:55:52.150685   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:52.190695   93254 fix.go:112] recreateIfNeeded on ha-791576-m02: state=Stopped err=<nil>
	W1202 19:55:52.190719   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:55:52.194176   93254 out.go:252] * Restarting existing docker container for "ha-791576-m02" ...
	I1202 19:55:52.194252   93254 cli_runner.go:164] Run: docker start ha-791576-m02
	I1202 19:55:52.599976   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:52.629412   93254 kic.go:430] container "ha-791576-m02" state is running.
	I1202 19:55:52.629885   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:52.664048   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:55:52.664285   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:55:52.664350   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:52.688321   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:52.688636   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:52.688648   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:55:52.689286   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:55:55.971095   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:55:55.971155   93254 ubuntu.go:182] provisioning hostname "ha-791576-m02"
	I1202 19:55:55.971238   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:55.998825   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:55.999132   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:55.999149   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m02 && echo "ha-791576-m02" | sudo tee /etc/hostname
	I1202 19:55:56.285260   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m02
	
	I1202 19:55:56.285380   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:56.324784   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:56.325097   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:56.325112   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:55:56.574478   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:55:56.574546   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:55:56.574578   93254 ubuntu.go:190] setting up certificates
	I1202 19:55:56.574609   93254 provision.go:84] configureAuth start
	I1202 19:55:56.574702   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:56.605527   93254 provision.go:143] copyHostCerts
	I1202 19:55:56.605564   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:56.605607   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:55:56.605617   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:55:56.605764   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:55:56.605858   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:56.605875   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:55:56.605880   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:55:56.605907   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:55:56.605945   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:56.605961   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:55:56.605965   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:55:56.605988   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:55:56.606032   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m02 san=[127.0.0.1 192.168.49.3 ha-791576-m02 localhost minikube]
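configureAuth regenerates the machine's server certificate with the SAN set shown above (loopback, the node IP 192.168.49.3, the hostname, localhost and minikube), signed by the shared CA under .minikube/certs. The log does not show the signing commands (minikube does this in Go), but an equivalent sketch with the same SANs would look roughly like this; the file names and validity period are illustrative only:

    # Hypothetical re-creation of a server cert with the SANs listed in the log.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.ha-791576-m02" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.3,DNS:ha-791576-m02,DNS:localhost,DNS:minikube')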
	I1202 19:55:57.020409   93254 provision.go:177] copyRemoteCerts
	I1202 19:55:57.020550   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:55:57.020628   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:57.038510   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:57.153644   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:55:57.153716   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:55:57.184300   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:55:57.184359   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:55:57.266970   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:55:57.267064   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:55:57.331675   93254 provision.go:87] duration metric: took 757.029391ms to configureAuth
	I1202 19:55:57.331740   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:55:57.331983   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:57.332101   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:57.363340   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:55:57.363649   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1202 19:55:57.363662   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:55:58.504594   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:55:58.504673   93254 machine.go:97] duration metric: took 5.840377716s to provisionDockerMachine
	I1202 19:55:58.504698   93254 start.go:293] postStartSetup for "ha-791576-m02" (driver="docker")
	I1202 19:55:58.504722   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:55:58.504818   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:55:58.504881   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.552759   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.683948   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:55:58.687504   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:55:58.687528   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:55:58.687538   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:55:58.687590   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:55:58.687661   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:55:58.687667   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:55:58.687766   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:55:58.696105   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:55:58.729078   93254 start.go:296] duration metric: took 224.353376ms for postStartSetup
	I1202 19:55:58.729200   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:55:58.729258   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.748281   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.865403   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:55:58.871596   93254 fix.go:56] duration metric: took 6.721165168s for fixHost
	I1202 19:55:58.871617   93254 start.go:83] releasing machines lock for "ha-791576-m02", held for 6.7212084s
	I1202 19:55:58.871682   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m02
	I1202 19:55:58.902526   93254 out.go:179] * Found network options:
	I1202 19:55:58.905433   93254 out.go:179]   - NO_PROXY=192.168.49.2
	W1202 19:55:58.908359   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:55:58.908394   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:55:58.908458   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:55:58.908500   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.908758   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:55:58.908808   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m02
	I1202 19:55:58.941876   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:58.957861   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m02/id_rsa Username:docker}
	I1202 19:55:59.379469   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:55:59.393428   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:55:59.393549   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:55:59.436981   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:55:59.437054   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:55:59.437109   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:55:59.437185   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:55:59.476789   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:55:59.492965   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:55:59.493030   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:55:59.510203   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:55:59.535902   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:55:59.890794   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:56:00.391688   93254 docker.go:234] disabling docker service ...
	I1202 19:56:00.391868   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:56:00.454884   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:56:00.506073   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:56:00.797340   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:56:01.166082   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:56:01.219009   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:56:01.256352   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:56:01.256455   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.307607   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:56:01.307708   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.346124   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.369272   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.393260   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:56:01.408865   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.438945   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.451063   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:56:01.488074   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:56:01.499136   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:56:01.507846   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:56:01.747608   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:57:32.000346   93254 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.252704452s)
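The block from 19:56:01.256352 onward rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, and an unprivileged-port sysctl) and then restarts cri-o; the restart alone takes 1m30s here, which accounts for much of this node's startup time. After those sed edits the drop-in should contain values like the ones below; this is a hypothetical spot-check, not output captured in the log:

    # Hypothetical verification of the edits applied above.
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected values, per the sed commands in the log:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])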
	I1202 19:57:32.000372   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:57:32.000423   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:57:32.004239   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:57:32.004296   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:57:32.007869   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:57:32.036443   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:57:32.036523   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:32.065233   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:32.100050   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:57:32.103063   93254 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:57:32.106043   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:57:32.121822   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:57:32.126366   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
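The one-liner above keeps /etc/hosts idempotent: it drops any existing host.minikube.internal line, appends the current mapping, writes the result to a temp file and copies it back with sudo; the same pattern is used for control-plane.minikube.internal later in the log. Generalised as a hypothetical helper:

    # Hypothetical helper mirroring the /etc/hosts update pattern in the log.
    add_host_entry() {   # usage: add_host_entry 192.168.49.1 host.minikube.internal
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }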
	I1202 19:57:32.138121   93254 mustload.go:66] Loading cluster: ha-791576
	I1202 19:57:32.138366   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:32.138687   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:57:32.155548   93254 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:57:32.155827   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.3
	I1202 19:57:32.155834   93254 certs.go:195] generating shared ca certs ...
	I1202 19:57:32.155849   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:57:32.155961   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:57:32.156000   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:57:32.156007   93254 certs.go:257] generating profile certs ...
	I1202 19:57:32.156076   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key
	I1202 19:57:32.156141   93254 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key.8b416d14
	I1202 19:57:32.156181   93254 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key
	I1202 19:57:32.156189   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:57:32.156201   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:57:32.156212   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:57:32.156222   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:57:32.156232   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 19:57:32.156243   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 19:57:32.156253   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 19:57:32.156264   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 19:57:32.156310   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:57:32.156339   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:57:32.156347   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:57:32.156372   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:57:32.156396   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:57:32.156422   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:57:32.156466   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:32.156496   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.156509   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.156520   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.156574   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:57:32.173330   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:57:32.269964   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 19:57:32.273629   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 19:57:32.281594   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 19:57:32.284955   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 19:57:32.292668   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 19:57:32.296257   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 19:57:32.304405   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 19:57:32.307845   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1202 19:57:32.316416   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 19:57:32.319715   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 19:57:32.331425   93254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 19:57:32.335418   93254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 19:57:32.345158   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:57:32.362660   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:57:32.381060   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:57:32.399011   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:57:32.417547   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 19:57:32.436697   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:57:32.454716   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:57:32.472049   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:57:32.488952   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:57:32.507493   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:57:32.525119   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:57:32.543594   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 19:57:32.556208   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 19:57:32.568883   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 19:57:32.582212   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1202 19:57:32.594098   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 19:57:32.606261   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 19:57:32.618196   93254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 19:57:32.631378   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:57:32.637197   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:57:32.645952   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.649933   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.650038   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:57:32.692551   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:57:32.700398   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:57:32.708435   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.711984   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.712047   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:32.752921   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:57:32.760626   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:57:32.768641   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.772345   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.772443   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:57:32.817730   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 19:57:32.825349   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:57:32.829063   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 19:57:32.869702   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 19:57:32.910289   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 19:57:32.951408   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 19:57:32.991818   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 19:57:33.032586   93254 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 19:57:33.073299   93254 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1202 19:57:33.073392   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
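The drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf gives the m02 kubelet its per-node identity, --hostname-override=ha-791576-m02 and --node-ip=192.168.49.3, on top of the shared /var/lib/kubelet/config.yaml. A hedged way to confirm what actually landed on the node (these are not commands from the log):

    # Hypothetical checks of the kubelet unit and drop-in written above.
    systemctl cat kubelet | grep -m1 -e '--node-ip='
    grep -e '--hostname-override=' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf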
	I1202 19:57:33.073421   93254 kube-vip.go:115] generating kube-vip config ...
	I1202 19:57:33.073489   93254 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 19:57:33.084964   93254 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 19:57:33.085019   93254 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
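Because the ip_vs modules are not available (19:57:33.084964), the generated kube-vip manifest omits IPVS-based control-plane load balancing; what remains is the ARP-mode virtual IP 192.168.49.254 on eth0, with leader election through the plndr-cp-lock lease in kube-system, so whichever control-plane node holds the lease answers for the API server VIP. Hypothetical checks, not taken from the log:

    # Hypothetical inspection of the VIP and the leader-election lease named above.
    ip addr show dev eth0 | grep 192.168.49.254      # present only on the current leader
    kubectl -n kube-system get lease plndr-cp-lock    # shows which node holds the VIP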
	I1202 19:57:33.085079   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:57:33.092389   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:57:33.092504   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 19:57:33.099839   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:57:33.111954   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:57:33.124537   93254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 19:57:33.139421   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:57:33.144249   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:33.154311   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:33.286984   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:33.300875   93254 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:57:33.301346   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:33.304919   93254 out.go:179] * Verifying Kubernetes components...
	I1202 19:57:33.307970   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:33.441136   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:33.455239   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 19:57:33.455306   93254 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:57:33.455557   93254 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:57:37.330869   93254 node_ready.go:49] node "ha-791576-m02" is "Ready"
	I1202 19:57:37.330905   93254 node_ready.go:38] duration metric: took 3.875318836s for node "ha-791576-m02" to be "Ready" ...
	I1202 19:57:37.330920   93254 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:57:37.330980   93254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:57:37.350335   93254 api_server.go:72] duration metric: took 4.049370544s to wait for apiserver process to appear ...
	I1202 19:57:37.350361   93254 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:57:37.350381   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:37.437921   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:37.437997   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:37.850509   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:37.877801   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:37.877836   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:38.351486   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:38.375050   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:38.375085   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:38.850665   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:38.878543   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:38.878572   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:39.351038   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:39.378413   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:39.378441   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:39.850846   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:39.864441   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:39.864468   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:40.350812   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:40.361521   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 19:57:40.361559   93254 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 19:57:40.850824   93254 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 19:57:40.864753   93254 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 19:57:40.866306   93254 api_server.go:141] control plane version: v1.34.2
	I1202 19:57:40.866336   93254 api_server.go:131] duration metric: took 3.51596701s to wait for apiserver health ...
	I1202 19:57:40.866371   93254 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:57:40.881984   93254 system_pods.go:59] 26 kube-system pods found
	I1202 19:57:40.882074   93254 system_pods.go:61] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.882090   93254 system_pods.go:61] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.882098   93254 system_pods.go:61] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:57:40.882107   93254 system_pods.go:61] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:57:40.882112   93254 system_pods.go:61] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:57:40.882116   93254 system_pods.go:61] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:57:40.882146   93254 system_pods.go:61] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:57:40.882164   93254 system_pods.go:61] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:57:40.882169   93254 system_pods.go:61] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:57:40.882175   93254 system_pods.go:61] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:57:40.882183   93254 system_pods.go:61] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running
	I1202 19:57:40.882192   93254 system_pods.go:61] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:57:40.882207   93254 system_pods.go:61] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:57:40.882228   93254 system_pods.go:61] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running
	I1202 19:57:40.882258   93254 system_pods.go:61] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:57:40.882267   93254 system_pods.go:61] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:57:40.882271   93254 system_pods.go:61] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:57:40.882280   93254 system_pods.go:61] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:57:40.882288   93254 system_pods.go:61] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:57:40.882291   93254 system_pods.go:61] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:57:40.882295   93254 system_pods.go:61] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:57:40.882298   93254 system_pods.go:61] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:57:40.882302   93254 system_pods.go:61] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:57:40.882306   93254 system_pods.go:61] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:57:40.882325   93254 system_pods.go:61] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:57:40.882337   93254 system_pods.go:61] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:57:40.882356   93254 system_pods.go:74] duration metric: took 15.961542ms to wait for pod list to return data ...
	I1202 19:57:40.882368   93254 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:57:40.886711   93254 default_sa.go:45] found service account: "default"
	I1202 19:57:40.886765   93254 default_sa.go:55] duration metric: took 4.377498ms for default service account to be created ...
	I1202 19:57:40.886816   93254 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:57:40.896351   93254 system_pods.go:86] 26 kube-system pods found
	I1202 19:57:40.896402   93254 system_pods.go:89] "coredns-66bc5c9577-hw99j" [41651b78-4f35-4858-82f9-9ce32d4640ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.896455   93254 system_pods.go:89] "coredns-66bc5c9577-w2245" [4bdef149-d532-4281-af19-05cf48d46a88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:57:40.896471   93254 system_pods.go:89] "etcd-ha-791576" [2a176863-dd59-46a6-bdb3-aeb7c65b7d01] Running
	I1202 19:57:40.896477   93254 system_pods.go:89] "etcd-ha-791576-m02" [fa7ac9e6-3e5f-4ac2-8bd4-e8f0c1061879] Running
	I1202 19:57:40.896488   93254 system_pods.go:89] "etcd-ha-791576-m03" [18c027b9-6573-42b9-816b-b2c477200a20] Running
	I1202 19:57:40.896493   93254 system_pods.go:89] "kindnet-2pf27" [baa33ef5-db31-44da-a1a3-e0c07870b236] Running
	I1202 19:57:40.896517   93254 system_pods.go:89] "kindnet-8zbzj" [c84fe35d-4a19-4745-8aee-58de31866e88] Running
	I1202 19:57:40.896529   93254 system_pods.go:89] "kindnet-ksng5" [94c35a7a-0257-4861-a1c9-94a7ba59ffe5] Running
	I1202 19:57:40.896547   93254 system_pods.go:89] "kindnet-m2l5j" [a984b329-2638-49d7-98e3-0c21cfed28c6] Running
	I1202 19:57:40.896561   93254 system_pods.go:89] "kube-apiserver-ha-791576" [460f158f-0500-4b2e-b75a-222db2752a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:57:40.896567   93254 system_pods.go:89] "kube-apiserver-ha-791576-m02" [60c7258b-d26e-4b2a-afd4-cba6b17192b1] Running
	I1202 19:57:40.896577   93254 system_pods.go:89] "kube-apiserver-ha-791576-m03" [9d1bd17a-375c-47df-b660-7df929e07c21] Running
	I1202 19:57:40.896584   93254 system_pods.go:89] "kube-controller-manager-ha-791576" [446392f7-3112-49fe-abed-3a9e5de126a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:57:40.896589   93254 system_pods.go:89] "kube-controller-manager-ha-791576-m02" [11e9db04-7e03-400f-8652-4f04b8a865a2] Running
	I1202 19:57:40.896594   93254 system_pods.go:89] "kube-controller-manager-ha-791576-m03" [37aced98-d2fd-4734-8013-768e54582975] Running
	I1202 19:57:40.896605   93254 system_pods.go:89] "kube-proxy-4tffm" [e6ccffca-ea01-42a1-b283-d48ebaaf0a2e] Running
	I1202 19:57:40.896635   93254 system_pods.go:89] "kube-proxy-dvt58" [30417a27-406d-4feb-84e2-3c143f2b99f5] Running
	I1202 19:57:40.896647   93254 system_pods.go:89] "kube-proxy-pjkt7" [2f48a929-ed6d-4816-89de-0d0c0906e695] Running
	I1202 19:57:40.896651   93254 system_pods.go:89] "kube-proxy-q5vfv" [011527c2-0bbf-4dd9-a775-7bbd1a8647a4] Running
	I1202 19:57:40.896655   93254 system_pods.go:89] "kube-scheduler-ha-791576" [82dccc4b-42d3-4faa-9841-6a6d645c0a33] Running
	I1202 19:57:40.896660   93254 system_pods.go:89] "kube-scheduler-ha-791576-m02" [f22fa891-6210-40d9-b64c-aaedddca9713] Running
	I1202 19:57:40.896669   93254 system_pods.go:89] "kube-scheduler-ha-791576-m03" [eec7c2c1-7809-497c-9629-842157df945d] Running
	I1202 19:57:40.896714   93254 system_pods.go:89] "kube-vip-ha-791576" [74c2aef6-8fae-41b1-8fa0-a251eaed2459] Running
	I1202 19:57:40.896731   93254 system_pods.go:89] "kube-vip-ha-791576-m02" [83556429-7359-474c-984e-b8ee1860d552] Running
	I1202 19:57:40.896736   93254 system_pods.go:89] "kube-vip-ha-791576-m03" [255c5d80-1a6b-41b0-8b35-e2462b1d4679] Running
	I1202 19:57:40.896740   93254 system_pods.go:89] "storage-provisioner" [7a2e34ca-2f88-457c-8898-9cfbab53ca55] Running
	I1202 19:57:40.896767   93254 system_pods.go:126] duration metric: took 9.944455ms to wait for k8s-apps to be running ...
	I1202 19:57:40.896779   93254 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:57:40.896851   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:57:40.912940   93254 system_svc.go:56] duration metric: took 16.146284ms WaitForService to wait for kubelet
	I1202 19:57:40.912971   93254 kubeadm.go:587] duration metric: took 7.612010896s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:57:40.913011   93254 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:57:40.922663   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922709   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922747   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922761   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922765   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:40.922770   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:40.922782   93254 node_conditions.go:105] duration metric: took 9.75895ms to run NodePressure ...
	I1202 19:57:40.922797   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:57:40.922840   93254 start.go:256] writing updated cluster config ...
	I1202 19:57:40.926963   93254 out.go:203] 
	I1202 19:57:40.930189   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:40.930349   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:40.933758   93254 out.go:179] * Starting "ha-791576-m04" worker node in "ha-791576" cluster
	I1202 19:57:40.937496   93254 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 19:57:40.940562   93254 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 19:57:40.944509   93254 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:57:40.944573   93254 cache.go:65] Caching tarball of preloaded images
	I1202 19:57:40.944591   93254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 19:57:40.944689   93254 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 19:57:40.944700   93254 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:57:40.944847   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:40.980485   93254 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 19:57:40.980503   93254 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 19:57:40.980516   93254 cache.go:243] Successfully downloaded all kic artifacts
	I1202 19:57:40.980539   93254 start.go:360] acquireMachinesLock for ha-791576-m04: {Name:mkf6d085e6ffaf9b8d3c89207d22561aa64cc068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:57:40.980591   93254 start.go:364] duration metric: took 37.824µs to acquireMachinesLock for "ha-791576-m04"
	I1202 19:57:40.980609   93254 start.go:96] Skipping create...Using existing machine configuration
	I1202 19:57:40.980616   93254 fix.go:54] fixHost starting: m04
	I1202 19:57:40.980868   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:57:41.009962   93254 fix.go:112] recreateIfNeeded on ha-791576-m04: state=Stopped err=<nil>
	W1202 19:57:41.009990   93254 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 19:57:41.013529   93254 out.go:252] * Restarting existing docker container for "ha-791576-m04" ...
	I1202 19:57:41.013708   93254 cli_runner.go:164] Run: docker start ha-791576-m04
	I1202 19:57:41.349696   93254 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:57:41.385329   93254 kic.go:430] container "ha-791576-m04" state is running.
	I1202 19:57:41.385673   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:41.416072   93254 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/config.json ...
	I1202 19:57:41.416305   93254 machine.go:94] provisionDockerMachine start ...
	I1202 19:57:41.416360   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:41.450379   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:41.450693   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:41.450702   93254 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:57:41.451334   93254 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 19:57:44.613206   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m04
	
	I1202 19:57:44.613228   93254 ubuntu.go:182] provisioning hostname "ha-791576-m04"
	I1202 19:57:44.613296   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:44.632442   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:44.632744   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:44.632755   93254 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-791576-m04 && echo "ha-791576-m04" | sudo tee /etc/hostname
	I1202 19:57:44.799185   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-791576-m04
	
	I1202 19:57:44.799313   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:44.822391   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:44.822698   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:44.822720   93254 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-791576-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-791576-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-791576-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:57:44.979513   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:57:44.979597   93254 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 19:57:44.979629   93254 ubuntu.go:190] setting up certificates
	I1202 19:57:44.979671   93254 provision.go:84] configureAuth start
	I1202 19:57:44.979758   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:45.000651   93254 provision.go:143] copyHostCerts
	I1202 19:57:45.000689   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:57:45.000721   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 19:57:45.000728   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 19:57:45.000802   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 19:57:45.001053   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:57:45.001076   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 19:57:45.001081   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 19:57:45.001115   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 19:57:45.001161   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:57:45.001176   93254 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 19:57:45.001180   93254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 19:57:45.001205   93254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 19:57:45.001250   93254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.ha-791576-m04 san=[127.0.0.1 192.168.49.5 ha-791576-m04 localhost minikube]
	I1202 19:57:45.318146   93254 provision.go:177] copyRemoteCerts
	I1202 19:57:45.318219   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:57:45.318283   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.341445   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:45.449731   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 19:57:45.449820   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:57:45.472182   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 19:57:45.472243   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:57:45.492286   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 19:57:45.492350   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:57:45.510812   93254 provision.go:87] duration metric: took 531.109583ms to configureAuth
	I1202 19:57:45.510841   93254 ubuntu.go:206] setting minikube options for container-runtime
	I1202 19:57:45.511124   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:45.511270   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.531424   93254 main.go:143] libmachine: Using SSH client type: native
	I1202 19:57:45.532066   93254 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1202 19:57:45.532093   93254 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:57:45.884616   93254 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:57:45.884638   93254 machine.go:97] duration metric: took 4.468325015s to provisionDockerMachine
	I1202 19:57:45.884650   93254 start.go:293] postStartSetup for "ha-791576-m04" (driver="docker")
	I1202 19:57:45.884699   93254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:57:45.884775   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:57:45.884823   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:45.903688   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.015544   93254 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:57:46.019398   93254 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 19:57:46.019427   93254 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 19:57:46.019438   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 19:57:46.019497   93254 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 19:57:46.019580   93254 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 19:57:46.019594   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /etc/ssl/certs/44702.pem
	I1202 19:57:46.019695   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 19:57:46.027313   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:46.046534   93254 start.go:296] duration metric: took 161.868987ms for postStartSetup
	I1202 19:57:46.046614   93254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:57:46.046664   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.064651   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.170656   93254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 19:57:46.175466   93254 fix.go:56] duration metric: took 5.194844037s for fixHost
	I1202 19:57:46.175488   93254 start.go:83] releasing machines lock for "ha-791576-m04", held for 5.194888303s
	I1202 19:57:46.175556   93254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:57:46.195693   93254 out.go:179] * Found network options:
	I1202 19:57:46.198432   93254 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 19:57:46.201295   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201328   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201354   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	W1202 19:57:46.201369   93254 proxy.go:120] fail to check proxy env: Error ip not in block
	I1202 19:57:46.201448   93254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:57:46.201500   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.201866   93254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:57:46.201941   93254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:57:46.219848   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.241958   93254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:57:46.425303   93254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:57:46.430326   93254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:57:46.430443   93254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:57:46.438789   93254 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 19:57:46.438867   93254 start.go:496] detecting cgroup driver to use...
	I1202 19:57:46.438915   93254 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 19:57:46.439004   93254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:57:46.456655   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:57:46.471141   93254 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:57:46.471238   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:57:46.496759   93254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:57:46.510741   93254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:57:46.633508   93254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:57:46.765301   93254 docker.go:234] disabling docker service ...
	I1202 19:57:46.765415   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:57:46.780559   93254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:57:46.793987   93254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:57:46.911887   93254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:57:47.041997   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:57:47.056582   93254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:57:47.071233   93254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:57:47.071325   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.080316   93254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:57:47.080415   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.090821   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.100556   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.110245   93254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:57:47.121207   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.131994   93254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.141137   93254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:57:47.150939   93254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:57:47.158669   93254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:57:47.166378   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:47.292693   93254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:57:47.494962   93254 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:57:47.495081   93254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:57:47.499951   93254 start.go:564] Will wait 60s for crictl version
	I1202 19:57:47.500031   93254 ssh_runner.go:195] Run: which crictl
	I1202 19:57:47.503579   93254 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 19:57:47.538410   93254 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 19:57:47.538551   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:47.570927   93254 ssh_runner.go:195] Run: crio --version
	I1202 19:57:47.607710   93254 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 19:57:47.610516   93254 out.go:179]   - env NO_PROXY=192.168.49.2
	I1202 19:57:47.613449   93254 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 19:57:47.616291   93254 cli_runner.go:164] Run: docker network inspect ha-791576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 19:57:47.633448   93254 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 19:57:47.637365   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:47.649386   93254 mustload.go:66] Loading cluster: ha-791576
	I1202 19:57:47.649615   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:47.649896   93254 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:57:47.667951   93254 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:57:47.668231   93254 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576 for IP: 192.168.49.5
	I1202 19:57:47.668239   93254 certs.go:195] generating shared ca certs ...
	I1202 19:57:47.668253   93254 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:57:47.668379   93254 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 19:57:47.668418   93254 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 19:57:47.668429   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 19:57:47.668440   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 19:57:47.668450   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 19:57:47.668462   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 19:57:47.668518   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 19:57:47.668548   93254 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 19:57:47.668557   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:57:47.668584   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:57:47.668607   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:57:47.668629   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 19:57:47.668673   93254 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 19:57:47.668703   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.668715   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.668726   93254 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem -> /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.668743   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:57:47.691818   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:57:47.709295   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:57:47.728849   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 19:57:47.751519   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 19:57:47.769113   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:57:47.789898   93254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 19:57:47.811416   93254 ssh_runner.go:195] Run: openssl version
	I1202 19:57:47.817999   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 19:57:47.826285   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.829982   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.830054   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 19:57:47.872757   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 19:57:47.880633   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:57:47.889438   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.893309   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.893421   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:57:47.934334   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:57:47.942513   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 19:57:47.950820   93254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.955232   93254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 19:57:47.955298   93254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 19:57:48.000169   93254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
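The openssl/ln sequence above is the standard c_rehash-style CA installation: hash each certificate's subject and expose it as /etc/ssl/certs/<hash>.0 so OpenSSL can find it. A small Go sketch of the same idea, shelling out to openssl exactly as the log does; the certificate path is taken from the log and the rest is illustrative.

	// rehash_cert.go - illustrative sketch of the hash-and-symlink step shown above.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/44702.pem" // path taken from the log; adjust as needed

		// "openssl x509 -hash -noout" prints the subject hash used to name /etc/ssl/certs/<hash>.0.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Replace any stale link, then point <hash>.0 at the certificate.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", cert)
	}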
	I1202 19:57:48.008314   93254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:57:48.014820   93254 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 19:57:48.014881   93254 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2 crio false true} ...
	I1202 19:57:48.014972   93254 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-791576-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-791576 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:57:48.015054   93254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:57:48.026264   93254 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:57:48.026381   93254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1202 19:57:48.034605   93254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 19:57:48.048065   93254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:57:48.063803   93254 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 19:57:48.067995   93254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:57:48.077597   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:48.208286   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:48.223948   93254 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1202 19:57:48.224395   93254 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:57:48.229649   93254 out.go:179] * Verifying Kubernetes components...
	I1202 19:57:48.232645   93254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:57:48.363476   93254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:57:48.379483   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
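Note the QPS:0, Burst:0 in the client config dump above: with both left at zero, client-go falls back to its default client-side rate limiter (roughly 5 QPS with a burst of 10), which is what produces the later "Waited before sending request ... client-side throttling" lines in this log. A hedged sketch of how those limits can be raised on a rest.Config; the kubeconfig path and values are illustrative.

	// client_qps.go - illustrative: raising client-go's client-side rate limits.
	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// kubeconfig path is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		// With QPS/Burst left at zero (as in the dump above) the default limiter applies.
		cfg.QPS = 50
		cfg.Burst = 100

		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
	}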
	W1202 19:57:48.379562   93254 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 19:57:48.379785   93254 node_ready.go:35] waiting up to 6m0s for node "ha-791576-m04" to be "Ready" ...
	W1202 19:57:50.383622   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	W1202 19:57:52.383990   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	W1202 19:57:54.883829   93254 node_ready.go:57] node "ha-791576-m04" has "Ready":"Unknown" status (will retry)
	I1202 19:57:55.884383   93254 node_ready.go:49] node "ha-791576-m04" is "Ready"
	I1202 19:57:55.884416   93254 node_ready.go:38] duration metric: took 7.504611892s for node "ha-791576-m04" to be "Ready" ...
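The node_ready.go wait above polls the node object until its Ready condition reports True (here it flipped after about 7.5s). A minimal client-go sketch of that kind of readiness poll; the node name is taken from the log, the kubeconfig path and timing are assumptions, and this is not minikube's actual implementation.

	// wait_node_ready.go - illustrative readiness poll, similar in spirit to the wait above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-791576-m04", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // the log above retries on a similar cadence
		}
		log.Fatal("timed out waiting for node to be Ready")
	}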
	I1202 19:57:55.884429   93254 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:57:55.884499   93254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:57:55.899211   93254 system_svc.go:56] duration metric: took 14.774003ms WaitForService to wait for kubelet
	I1202 19:57:55.899239   93254 kubeadm.go:587] duration metric: took 7.675249996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:57:55.899279   93254 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:57:55.902757   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902783   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902794   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902800   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902805   93254 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 19:57:55.902809   93254 node_conditions.go:123] node cpu capacity is 2
	I1202 19:57:55.902813   93254 node_conditions.go:105] duration metric: took 3.530143ms to run NodePressure ...
	I1202 19:57:55.902825   93254 start.go:242] waiting for startup goroutines ...
	I1202 19:57:55.902850   93254 start.go:256] writing updated cluster config ...
	I1202 19:57:55.903157   93254 ssh_runner.go:195] Run: rm -f paused
	I1202 19:57:55.907062   93254 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:57:55.907561   93254 kapi.go:59] client config for ha-791576: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/ha-791576/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 19:57:55.926185   93254 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hw99j" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 19:57:57.936730   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:00.437098   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:02.936225   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:04.937647   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:07.433127   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:09.433300   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:11.439409   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:13.936991   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:16.432700   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	W1202 19:58:18.432998   93254 pod_ready.go:104] pod "coredns-66bc5c9577-hw99j" is not "Ready", error: <nil>
	I1202 19:58:19.936601   93254 pod_ready.go:94] pod "coredns-66bc5c9577-hw99j" is "Ready"
	I1202 19:58:19.936627   93254 pod_ready.go:86] duration metric: took 24.01037278s for pod "coredns-66bc5c9577-hw99j" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.936639   93254 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w2245" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.946385   93254 pod_ready.go:94] pod "coredns-66bc5c9577-w2245" is "Ready"
	I1202 19:58:19.946408   93254 pod_ready.go:86] duration metric: took 9.76284ms for pod "coredns-66bc5c9577-w2245" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.950499   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.967558   93254 pod_ready.go:94] pod "etcd-ha-791576" is "Ready"
	I1202 19:58:19.967580   93254 pod_ready.go:86] duration metric: took 17.043001ms for pod "etcd-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.967589   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.983217   93254 pod_ready.go:94] pod "etcd-ha-791576-m02" is "Ready"
	I1202 19:58:19.983312   93254 pod_ready.go:86] duration metric: took 15.715518ms for pod "etcd-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:19.983336   93254 pod_ready.go:83] waiting for pod "etcd-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.126953   93254 request.go:683] "Waited before sending request" delay="135.197879ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m03"
	I1202 19:58:20.129983   93254 pod_ready.go:99] pod "etcd-ha-791576-m03" in "kube-system" namespace is gone: node "ha-791576-m03" hosting pod "etcd-ha-791576-m03" is not found/running (skipping!): nodes "ha-791576-m03" not found
	I1202 19:58:20.130062   93254 pod_ready.go:86] duration metric: took 146.705626ms for pod "etcd-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.327487   93254 request.go:683] "Waited before sending request" delay="197.274849ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1202 19:58:20.331946   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.527354   93254 request.go:683] "Waited before sending request" delay="195.301984ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576"
	I1202 19:58:20.726783   93254 request.go:683] "Waited before sending request" delay="195.232619ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:20.729884   93254 pod_ready.go:94] pod "kube-apiserver-ha-791576" is "Ready"
	I1202 19:58:20.729911   93254 pod_ready.go:86] duration metric: took 397.935401ms for pod "kube-apiserver-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.729921   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:20.927333   93254 request.go:683] "Waited before sending request" delay="197.344927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576-m02"
	I1202 19:58:21.127530   93254 request.go:683] "Waited before sending request" delay="195.226515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m02"
	I1202 19:58:21.134380   93254 pod_ready.go:94] pod "kube-apiserver-ha-791576-m02" is "Ready"
	I1202 19:58:21.134412   93254 pod_ready.go:86] duration metric: took 404.483988ms for pod "kube-apiserver-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.134423   93254 pod_ready.go:83] waiting for pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.326813   93254 request.go:683] "Waited before sending request" delay="192.320431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-791576-m03"
	I1202 19:58:21.527439   93254 request.go:683] "Waited before sending request" delay="197.329437ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576-m03"
	I1202 19:58:21.533492   93254 pod_ready.go:99] pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace is gone: node "ha-791576-m03" hosting pod "kube-apiserver-ha-791576-m03" is not found/running (skipping!): nodes "ha-791576-m03" not found
	I1202 19:58:21.533559   93254 pod_ready.go:86] duration metric: took 399.129563ms for pod "kube-apiserver-ha-791576-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.727056   93254 request.go:683] "Waited before sending request" delay="193.360691ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1202 19:58:21.730488   93254 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:21.926811   93254 request.go:683] "Waited before sending request" delay="196.233661ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-791576"
	I1202 19:58:22.127186   93254 request.go:683] "Waited before sending request" delay="194.445087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:22.326846   93254 request.go:683] "Waited before sending request" delay="96.137701ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-791576"
	I1202 19:58:22.527173   93254 request.go:683] "Waited before sending request" delay="197.340316ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:22.927176   93254 request.go:683] "Waited before sending request" delay="193.337028ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	I1202 19:58:23.326849   93254 request.go:683] "Waited before sending request" delay="93.266332ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-791576"
	W1202 19:58:23.736689   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:25.737056   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:27.748280   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:30.236783   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:32.236980   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:34.736941   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	W1202 19:58:37.237158   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576" is not "Ready", error: <nil>
	I1202 19:58:38.237174   93254 pod_ready.go:94] pod "kube-controller-manager-ha-791576" is "Ready"
	I1202 19:58:38.237206   93254 pod_ready.go:86] duration metric: took 16.506691586s for pod "kube-controller-manager-ha-791576" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:58:38.237217   93254 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 19:58:40.244619   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:42.254491   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:44.742876   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:46.743816   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:49.244146   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:51.244844   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:53.742978   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:55.743809   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:58:58.244614   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:00.270137   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:02.744270   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:04.744321   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:07.244122   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:09.253242   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:11.744525   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:14.244287   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:16.743480   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:18.743527   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:20.744157   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:22.744418   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:25.244307   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:27.244638   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:29.747394   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:32.243699   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:34.244795   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:36.744345   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:39.244487   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:41.743981   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:44.244128   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:46.743606   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:49.243339   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:51.244231   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:53.743102   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:56.242882   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 19:59:58.243182   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:00.266823   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:02.745097   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:05.243680   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:07.244023   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:09.743730   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:12.243875   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:14.744016   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:17.243913   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:19.244051   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:21.244857   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:23.743729   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:25.744255   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:27.744400   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:30.244688   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:32.247066   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:34.743523   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:37.244239   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:39.743699   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:41.744670   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:44.244162   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:46.743513   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:49.245392   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:51.744149   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:54.248947   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:56.743993   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:00:59.244304   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:01.246223   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:03.744505   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:06.243892   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:08.743156   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:10.743380   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:12.744647   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:15.244219   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:17.744350   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:20.243654   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:22.245725   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:24.247107   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:26.743319   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:28.743362   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:30.744276   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:33.243318   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:35.245433   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:37.743505   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:39.745223   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:42.248295   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:44.742894   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:46.744704   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:49.243457   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:51.244130   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	W1202 20:01:53.745924   93254 pod_ready.go:104] pod "kube-controller-manager-ha-791576-m02" is not "Ready", error: <nil>
	I1202 20:01:55.907841   93254 pod_ready.go:86] duration metric: took 3m17.670596483s for pod "kube-controller-manager-ha-791576-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:01:55.907902   93254 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1202 20:01:55.907923   93254 pod_ready.go:40] duration metric: took 4m0.000821875s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:01:55.911296   93254 out.go:203] 
	W1202 20:01:55.914260   93254 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1202 20:01:55.917058   93254 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.66851571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.69141266Z" level=info msg="Created container d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398: kube-system/storage-provisioner/storage-provisioner" id=1b10ff43-5e40-4558-8196-1d7f016dd505 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.692654188Z" level=info msg="Starting container: d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398" id=1c87f7b0-7024-41ae-99fe-2425cae60e3e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:58:08 ha-791576 crio[669]: time="2025-12-02T19:58:08.694389348Z" level=info msg="Started container" PID=1429 containerID=d355d98782252d734c2f2c33f47ba5789709b26cbd3428c8ef63575ff148f398 description=kube-system/storage-provisioner/storage-provisioner id=1c87f7b0-7024-41ae-99fe-2425cae60e3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=efd793dccee0e2915ee98b405885350b8a60e3279add6b36c21a4428221c8a01
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.202100018Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206090778Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206127076Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.206153939Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209705243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209867823Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.209904696Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213036515Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213066955Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.213094302Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.21610966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 19:58:18 ha-791576 crio[669]: time="2025-12-02T19:58:18.216139813Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.228833217Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=39ed74a3-84e9-4181-80c6-ff0f611a3e84 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.23041474Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=10f326ec-4b42-40a0-bdba-06b31bdd4438 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.233901241Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-791576/kube-controller-manager" id=d524785c-b64f-418f-8cc7-4f78914e9ea9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.233996722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.250249794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.252295749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.2704154Z" level=info msg="Created container 2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4: kube-system/kube-controller-manager-ha-791576/kube-controller-manager" id=d524785c-b64f-418f-8cc7-4f78914e9ea9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.274529003Z" level=info msg="Starting container: 2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4" id=2b730746-da1e-4be4-b3ea-e96c0259c15d name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 19:58:26 ha-791576 crio[669]: time="2025-12-02T19:58:26.277250428Z" level=info msg="Started container" PID=1479 containerID=2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4 description=kube-system/kube-controller-manager-ha-791576/kube-controller-manager id=2b730746-da1e-4be4-b3ea-e96c0259c15d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4659c27a1e2a230e86c92853e4a009f926841d3b7dc58fbc2c2a31be03f223b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	2f22118538832       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   5 minutes ago       Running             kube-controller-manager   7                   4659c27a1e2a2       kube-controller-manager-ha-791576   kube-system
	d355d98782252       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       5                   efd793dccee0e       storage-provisioner                 kube-system
	c5b23f7fd12dd       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   1                   083931905fb04       busybox-7b57f96db7-l5g8z            default
	5c0daa7c8d4e1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       4                   efd793dccee0e       storage-provisioner                 kube-system
	a7c674fd4beed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   0b0e4231caf19       coredns-66bc5c9577-w2245            kube-system
	1fa21535998b0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   cb80d052040d5       coredns-66bc5c9577-hw99j            kube-system
	355934c2fc929       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   6 minutes ago       Running             kube-proxy                2                   16e723f810dce       kube-proxy-q5vfv                    kube-system
	02e772d860e77       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 minutes ago       Running             kindnet-cni               2                   9223b1241d5be       kindnet-m2l5j                       kube-system
	ad2e9bee4038e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Exited              kube-controller-manager   6                   4659c27a1e2a2       kube-controller-manager-ha-791576   kube-system
	7193dbe9e1382       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            2                   4b7e6eb9253e6       kube-scheduler-ha-791576            kube-system
	53ec2f9388eca       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   7 minutes ago       Running             kube-apiserver            2                   11498d51b1e18       kube-apiserver-ha-791576            kube-system
	9e7e710fc30aa       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  2                   447647f67c33c       kube-vip-ha-791576                  kube-system
	935b971802eea       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Running             etcd                      2                   5c5f7b2e5b8f1       etcd-ha-791576                      kube-system
	
	
	==> coredns [1fa21535998b03372b957beaac33c0db2b71496fe539f42e2245c5ea3ba2d6e9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47259 - 63703 "HINFO IN 335106981740875206.600763774367396684. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.032064587s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a7c674fd4beedc2112aa22c1ce1eee71496d5b6be459181558118d06ad4a8445] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59040 - 1455 "HINFO IN 6249761343778063196.7050624658331465362. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039193622s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
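The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors in both CoreDNS instances mean the pods could not reach the in-cluster kubernetes Service VIP while the control plane was still settling. A tiny Go probe that reproduces that reachability check is sketched below; it only says anything useful when run from inside the cluster network, and the address is simply the default Service VIP seen in the errors above.

	// probe_apiserver_vip.go - illustrative connectivity probe for the failure mode
	// CoreDNS reports above; must run inside the cluster network to be meaningful.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the "kubernetes" Service VIP from the CoreDNS errors.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable")
	}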
	
	
	==> describe nodes <==
	Name:               ha-791576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_41_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:41:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:03:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:03:34 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:03:34 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:03:34 +0000   Tue, 02 Dec 2025 19:41:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:03:34 +0000   Tue, 02 Dec 2025 19:47:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-791576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                2cbc5f56-f69a-4743-bfe0-c26cb688e6dd
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l5g8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-hw99j             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22m
	  kube-system                 coredns-66bc5c9577-w2245             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22m
	  kube-system                 etcd-ha-791576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-m2l5j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-791576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-791576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-q5vfv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-791576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-791576                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m                     kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 22m                    kube-proxy       
	  Normal   Starting                 22m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 22m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     22m (x8 over 22m)      kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    22m (x8 over 22m)      kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  22m (x8 over 22m)      kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22m                    kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  22m                    kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     22m                    kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 22m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 22m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           22m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           21m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeReady                21m                    kubelet          Node ha-791576 status is now: NodeReady
	  Normal   RegisteredNode           20m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   Starting                 7m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m48s (x8 over 7m48s)  kubelet          Node ha-791576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m48s (x8 over 7m48s)  kubelet          Node ha-791576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m48s (x8 over 7m48s)  kubelet          Node ha-791576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	  Normal   RegisteredNode           61s                    node-controller  Node ha-791576 event: Registered Node ha-791576 in Controller
	
	
	Name:               ha-791576-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:42:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:03:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:02:58 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:02:58 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:02:58 +0000   Tue, 02 Dec 2025 19:42:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:02:58 +0000   Tue, 02 Dec 2025 19:42:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-791576-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dee40d7f-dceb-491c-be1b-bbfe6e5bbf5d
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-npkff                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-791576-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-ksng5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-791576-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-791576-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-pjkt7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-791576-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-791576-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 21m                    kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 5m18s                  kube-proxy       
	  Normal   RegisteredNode           21m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           21m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           20m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)      kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   Starting                 7m45s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m45s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m44s (x8 over 7m45s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m44s (x8 over 7m45s)  kubelet          Node ha-791576-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m44s (x8 over 7m45s)  kubelet          Node ha-791576-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        6m45s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	  Normal   RegisteredNode           61s                    node-controller  Node ha-791576-m02 event: Registered Node ha-791576-m02 in Controller
	
	
	Name:               ha-791576-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T19_44_30_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:03:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:02:20 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:02:20 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:02:20 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:02:20 +0000   Tue, 02 Dec 2025 19:57:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-791576-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                368f8765-e8de-4d0d-9ce4-3a1b12660712
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-k9bh8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-8zbzj               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-proxy-4tffm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m36s                  kube-proxy       
	  Normal   Starting                 19m                    kube-proxy       
	  Normal   NodeHasSufficientPID     19m (x3 over 19m)      kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 19m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m (x3 over 19m)      kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x3 over 19m)      kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           19m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           19m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           19m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeReady                18m                    kubelet          Node ha-791576-m04 status is now: NodeReady
	  Normal   RegisteredNode           17m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   NodeNotReady             15m                    node-controller  Node ha-791576-m04 status is now: NodeNotReady
	  Normal   Starting                 5m57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m54s (x8 over 5m57s)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m54s (x8 over 5m57s)  kubelet          Node ha-791576-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m54s (x8 over 5m57s)  kubelet          Node ha-791576-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	  Normal   RegisteredNode           61s                    node-controller  Node ha-791576-m04 event: Registered Node ha-791576-m04 in Controller
	
	
	Name:               ha-791576-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-791576-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=ha-791576
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_02T20_02_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:02:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-791576-m05
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:03:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:03:35 +0000   Tue, 02 Dec 2025 20:02:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:03:35 +0000   Tue, 02 Dec 2025 20:02:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:03:35 +0000   Tue, 02 Dec 2025 20:02:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:03:35 +0000   Tue, 02 Dec 2025 20:03:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-791576-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                e35036aa-af90-47f8-a8b8-72c94885fd04
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-791576-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-glctp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-ha-791576-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-ha-791576-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-2rjjx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-ha-791576-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-vip-ha-791576-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        43s   kube-proxy       
	  Normal  RegisteredNode  56s   node-controller  Node ha-791576-m05 event: Registered Node ha-791576-m05 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node ha-791576-m05 event: Registered Node ha-791576-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.030462] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032317] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.746139] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.396651] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 2 18:47] hrtimer: interrupt took 31528848 ns
	[Dec 2 18:50] overlayfs: idmapped layers are currently not supported
	[  +0.068869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 2 18:55] overlayfs: idmapped layers are currently not supported
	[Dec 2 18:56] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:09] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:41] overlayfs: idmapped layers are currently not supported
	[ +32.622792] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:44] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:45] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:46] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:47] overlayfs: idmapped layers are currently not supported
	[ +24.868591] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:55] overlayfs: idmapped layers are currently not supported
	[  +3.715582] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:58] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [935b971802eea43815b6a2ba78749d6f6a65dfeb75a70453def4a7ff8c6e8f29] <==
	{"level":"info","ts":"2025-12-02T20:02:30.322629Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a"}
	{"level":"warn","ts":"2025-12-02T20:02:30.550286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:57320","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:02:30.552490Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":7843840,"size":"7.8 MB"}
	{"level":"error","ts":"2025-12-02T20:02:30.564614Z","caller":"etcdserver/server.go:1601","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1601\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1542\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1514\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1466\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.(*ClusterServer).MemberPromote\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/member.go:101\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler.func1\n\tgo.etcd.io/etcd/api/v3@v3.6.5/etcdserverpb/rpc.pb.go:7432\ngo.etcd.io/etcd/server/v3/etcdserv
er/api/v3rpc.Server.(*ServerMetrics).UnaryServerInterceptor.UnaryServerInterceptor.func12\n\tgithub.com/grpc-ecosystem/go-grpc-middleware/v2@v2.1.0/interceptors/server.go:22\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newUnaryInterceptor.func5\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:74\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newLogUnaryInterceptor.func4\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:81\ngoogle.golang.org/grpc.NewServer.chainUnaryServerInterceptors.chainUnaryInterceptors.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1208\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler\n\tgo.etcd.io/etcd/api/v3@v3.6.5/etcdserverpb/rpc.pb.go:7434\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\tgoo
gle.golang.org/grpc@v1.71.1/server.go:1405\ngoogle.golang.org/grpc.(*Server).handleStream\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1815\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1035"}
	{"level":"info","ts":"2025-12-02T20:02:30.693802Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":5071,"remote-peer-id":"6b1440570a7e710a","bytes":7852987,"size":"7.9 MB"}
	{"level":"warn","ts":"2025-12-02T20:02:30.800834Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:30.808913Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:02:30.991914Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"6b1440570a7e710a","error":"failed to write 6b1440570a7e710a on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.6:33278: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-02T20:02:30.992104Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a"}
	{"level":"warn","ts":"2025-12-02T20:02:30.994276Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a"}
	{"level":"info","ts":"2025-12-02T20:02:31.032971Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6b1440570a7e710a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-02T20:02:31.033434Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"6b1440570a7e710a"}
	{"level":"info","ts":"2025-12-02T20:02:31.033491Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a"}
	{"level":"info","ts":"2025-12-02T20:02:31.062926Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a"}
	{"level":"info","ts":"2025-12-02T20:02:31.071176Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4579929246608719274 7715862804174893322 12593026477526642892)"}
	{"level":"info","ts":"2025-12-02T20:02:31.071402Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"6b1440570a7e710a"}
	{"level":"info","ts":"2025-12-02T20:02:31.071470Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"6b1440570a7e710a"}
	{"level":"info","ts":"2025-12-02T20:02:31.085870Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6b1440570a7e710a","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-02T20:02:31.085988Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a"}
	{"level":"info","ts":"2025-12-02T20:02:31.157954Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6b1440570a7e710a"}
	{"level":"info","ts":"2025-12-02T20:02:40.734073Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-02T20:02:54.946918Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-02T20:03:00.694661Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"6b1440570a7e710a","bytes":7852987,"size":"7.9 MB","took":"30.769671008s"}
	{"level":"warn","ts":"2025-12-02T20:03:39.737943Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.365145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:368661"}
	{"level":"info","ts":"2025-12-02T20:03:39.738017Z","caller":"traceutil/trace.go:172","msg":"trace[436232768] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:4582; }","duration":"176.455874ms","start":"2025-12-02T20:03:39.561549Z","end":"2025-12-02T20:03:39.738005Z","steps":["trace[436232768] 'range keys from bolt db'  (duration: 175.279758ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:03:40 up  1:45,  0 user,  load average: 1.07, 1.41, 1.40
	Linux ha-791576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [02e772d860e77006ec0b051223b10e67de2ed41ecc1b18874de331cdb32bd1a6] <==
	I1202 20:03:08.207976       1 main.go:324] Node ha-791576-m05 has CIDR [10.244.2.0/24] 
	I1202 20:03:18.205883       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1202 20:03:18.205921       1 main.go:324] Node ha-791576-m05 has CIDR [10.244.2.0/24] 
	I1202 20:03:18.206101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:03:18.206120       1 main.go:301] handling current node
	I1202 20:03:18.206136       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:03:18.206142       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:03:18.206252       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:03:18.206267       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:03:28.203581       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1202 20:03:28.203632       1 main.go:324] Node ha-791576-m05 has CIDR [10.244.2.0/24] 
	I1202 20:03:28.203839       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:03:28.203856       1 main.go:301] handling current node
	I1202 20:03:28.203870       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:03:28.203875       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:03:28.203970       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:03:28.203983       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:03:38.202515       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 20:03:38.202632       1 main.go:301] handling current node
	I1202 20:03:38.202693       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 20:03:38.202801       1 main.go:324] Node ha-791576-m02 has CIDR [10.244.1.0/24] 
	I1202 20:03:38.202953       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 20:03:38.202990       1 main.go:324] Node ha-791576-m04 has CIDR [10.244.3.0/24] 
	I1202 20:03:38.203090       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1202 20:03:38.203126       1 main.go:324] Node ha-791576-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [53ec2f9388ecacb74421a2e8c3b5d943afd06e705e756948fa12bc41dd8a37f9] <==
	{"level":"warn","ts":"2025-12-02T19:57:37.266812Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d23c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.266832Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001d01680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274392Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a21a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274785Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40025223c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274836Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001283860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274869Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001e212c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274899Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000c8fa40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274921Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d32c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274946Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002889680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274966Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f383c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.274993Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028881e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275010Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023a65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275027Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f394a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275097Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400248da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.275220Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028890e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.279316Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028541e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-02T19:57:37.279511Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000c8fa40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1202 19:57:37.337298       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	{"level":"warn","ts":"2025-12-02T19:57:38.096782Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015d23c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1202 19:57:38.096878       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.128576061s, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
	I1202 19:57:40.624545       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1202 19:57:40.936228       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1202 19:58:29.433907       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 19:58:31.983629       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 19:58:32.004810       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [2f221185388324fe71f910f8826c030ea12b330e8eb71520707267952a5db8f4] <==
	E1202 19:59:09.180918       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	E1202 19:59:09.180924       1 gc_controller.go:151] "Failed to get node" err="node \"ha-791576-m03\" not found" logger="pod-garbage-collector-controller" node="ha-791576-m03"
	I1202 19:59:09.200950       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-791576-m03"
	I1202 19:59:09.233282       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-791576-m03"
	I1202 19:59:09.233391       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-xjn7v"
	I1202 19:59:09.267544       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-xjn7v"
	I1202 19:59:09.267590       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-791576-m03"
	I1202 19:59:09.304785       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-791576-m03"
	I1202 19:59:09.305077       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-791576-m03"
	I1202 19:59:09.339802       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-791576-m03"
	I1202 19:59:09.339845       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-791576-m03"
	I1202 19:59:09.388801       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-791576-m03"
	I1202 19:59:09.388937       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dvt58"
	I1202 19:59:09.431739       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dvt58"
	I1202 19:59:09.432083       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:59:09.469146       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="default/busybox-7b57f96db7-zjghb"
	I1202 19:59:09.469262       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-791576-m03"
	I1202 19:59:09.512224       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-791576-m03"
	I1202 19:59:09.512321       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2pf27"
	I1202 19:59:09.551464       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2pf27"
	I1202 20:02:40.221475       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-791576-m04"
	I1202 20:02:40.223507       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-791576-m05\" does not exist"
	I1202 20:02:40.244730       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-791576-m05" podCIDRs=["10.244.2.0/24"]
	I1202 20:02:44.548839       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-791576-m05"
	I1202 20:03:35.322436       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-791576-m04"
	
	
	==> kube-controller-manager [ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b] <==
	I1202 19:57:21.480081       1 serving.go:386] Generated self-signed cert in-memory
	I1202 19:57:22.307047       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 19:57:22.307083       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:57:22.308866       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 19:57:22.309043       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 19:57:22.309144       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 19:57:22.309457       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1202 19:57:37.311326       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [355934c2fc92908a3d014373a10e2ad38fde6cd637a204a613dd4cf27e58d5de] <==
	I1202 19:57:38.434579       1 server_linux.go:53] "Using iptables proxy"
	I1202 19:57:38.599480       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:57:38.700098       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:57:38.700208       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 19:57:38.700313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:57:38.806652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 19:57:38.806864       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:57:38.840406       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:57:38.840778       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:57:38.840994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:57:38.842280       1 config.go:200] "Starting service config controller"
	I1202 19:57:38.842343       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:57:38.842391       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:57:38.842435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:57:38.842472       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:57:38.842507       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:57:38.849620       1 config.go:309] "Starting node config controller"
	I1202 19:57:38.849733       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:57:38.849766       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 19:57:38.946880       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:57:38.946930       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 19:57:38.946999       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7193dbe9e138217968055549ef0c321456d1ba0d688ed39c88faecd90d288068] <==
	I1202 19:56:01.445556       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:56:01.445721       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 19:56:01.445867       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 19:56:01.446001       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 19:56:01.545921       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1202 19:58:29.288494       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-k9bh8\": pod busybox-7b57f96db7-k9bh8 is already assigned to node \"ha-791576-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-k9bh8" node="ha-791576-m04"
	E1202 19:58:29.288769       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4eb2efb8-62a6-4a52-bafd-ddc9837ef293(default/busybox-7b57f96db7-k9bh8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-k9bh8"
	E1202 19:58:29.288838       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-k9bh8\": pod busybox-7b57f96db7-k9bh8 is already assigned to node \"ha-791576-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-k9bh8"
	I1202 19:58:29.290780       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-k9bh8" node="ha-791576-m04"
	E1202 20:02:40.314854       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2rjjx\": pod kube-proxy-2rjjx is already assigned to node \"ha-791576-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2rjjx" node="ha-791576-m05"
	E1202 20:02:40.315095       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod cfe6ad34-3835-463e-b2d2-d29becfb875a(kube-system/kube-proxy-2rjjx) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-2rjjx"
	E1202 20:02:40.315202       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2rjjx\": pod kube-proxy-2rjjx is already assigned to node \"ha-791576-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-2rjjx"
	I1202 20:02:40.320672       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2rjjx" node="ha-791576-m05"
	E1202 20:02:40.407113       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-glctp\": pod kindnet-glctp is already assigned to node \"ha-791576-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-glctp" node="ha-791576-m05"
	E1202 20:02:40.407259       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-glctp\": pod kindnet-glctp is already assigned to node \"ha-791576-m05\"" logger="UnhandledError" pod="kube-system/kindnet-glctp"
	I1202 20:02:40.407324       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-glctp" node="ha-791576-m05"
	E1202 20:02:40.647805       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5cmpp\": pod kindnet-5cmpp is already assigned to node \"ha-791576-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-5cmpp" node="ha-791576-m05"
	E1202 20:02:40.648012       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod dff08483-99cb-4fb2-bb3c-b9086ed8e48a(kube-system/kindnet-5cmpp) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-5cmpp"
	E1202 20:02:40.648092       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5cmpp\": pod kindnet-5cmpp is already assigned to node \"ha-791576-m05\"" logger="UnhandledError" pod="kube-system/kindnet-5cmpp"
	E1202 20:02:40.647953       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fn2n9\": pod kube-proxy-fn2n9 is already assigned to node \"ha-791576-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fn2n9" node="ha-791576-m05"
	E1202 20:02:40.648218       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4212293e-3dd4-4df9-933c-ef18066ef86e(kube-system/kube-proxy-fn2n9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-fn2n9"
	I1202 20:02:40.653748       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5cmpp" node="ha-791576-m05"
	E1202 20:02:40.653966       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fn2n9\": pod kube-proxy-fn2n9 is already assigned to node \"ha-791576-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-fn2n9"
	I1202 20:02:40.654464       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fn2n9" node="ha-791576-m05"
	E1202 20:02:40.754069       1 schedule_one.go:1128] "Error updating pod" err="pods \"kube-proxy-fn2n9\" not found" logger="UnhandledError" pod="kube-system/kube-proxy-fn2n9"
	
	
	==> kubelet <==
	Dec 02 19:57:23 ha-791576 kubelet[806]: E1202 19:57:23.174754     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T19:57:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"re
cursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-791576\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-791576/status?timeout=10s\": context deadline exceeded"
	Dec 02 19:57:32 ha-791576 kubelet[806]: E1202 19:57:32.339488     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-791576?timeout=10s\": context deadline exceeded" interval="800ms"
	Dec 02 19:57:33 ha-791576 kubelet[806]: E1202 19:57:33.176339     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-791576\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-791576?timeout=10s\": context deadline exceeded"
	Dec 02 19:57:35 ha-791576 kubelet[806]: E1202 19:57:35.968777     806 kubelet.go:3222] "Failed creating a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-controller-manager-ha-791576"
	Dec 02 19:57:35 ha-791576 kubelet[806]: I1202 19:57:35.968822     806 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-791576"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.477293     806 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.550540     806 scope.go:117] "RemoveContainer" containerID="1481b78f0b49db2c5b77d1f4b1a48f1606d7b5b7efc574d9920be0dcf7d60944"
	Dec 02 19:57:37 ha-791576 kubelet[806]: I1202 19:57:37.551052     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:37 ha-791576 kubelet[806]: E1202 19:57:37.551183     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:38 ha-791576 kubelet[806]: W1202 19:57:38.005619     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio-083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c WatchSource:0}: Error finding container 083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c: Status 404 returned error can't find the container with id 083931905fb04943c2abf13e4ba03c75427ff94e158bd40b0084d0625f099b3c
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.163483     806 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-791576\" already exists" pod="kube-system/kube-scheduler-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: I1202 19:57:38.163520     806 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.241716     806 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-vip-ha-791576\" already exists" pod="kube-system/kube-vip-ha-791576"
	Dec 02 19:57:38 ha-791576 kubelet[806]: I1202 19:57:38.576730     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:38 ha-791576 kubelet[806]: E1202 19:57:38.577312     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:45 ha-791576 kubelet[806]: I1202 19:57:45.547133     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:45 ha-791576 kubelet[806]: E1202 19:57:45.547777     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:57:51 ha-791576 kubelet[806]: E1202 19:57:51.235433     806 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9047b34b16f7f1aeb5b86610976368ec3265e72120dd291f6ef7165fbdb40f01/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9047b34b16f7f1aeb5b86610976368ec3265e72120dd291f6ef7165fbdb40f01/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/4.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/4.log: no such file or directory
	Dec 02 19:57:51 ha-791576 kubelet[806]: E1202 19:57:51.237620     806 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/11770d173b0bf8e21fa767a44a6b06c28990c5d024bd0ff30f895a2c8315127e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/11770d173b0bf8e21fa767a44a6b06c28990c5d024bd0ff30f895a2c8315127e/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/5.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-791576_eb073a7b89b0145d88727f941b3980dc/kube-controller-manager/5.log: no such file or directory
	Dec 02 19:57:58 ha-791576 kubelet[806]: I1202 19:57:58.228513     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:57:58 ha-791576 kubelet[806]: E1202 19:57:58.229379     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:58:08 ha-791576 kubelet[806]: I1202 19:58:08.659780     806 scope.go:117] "RemoveContainer" containerID="5c0daa7c8d4e1a9a2a77b1849e4249d4f9f28faa84c47fbc750bdf4924430591"
	Dec 02 19:58:11 ha-791576 kubelet[806]: I1202 19:58:11.230446     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	Dec 02 19:58:11 ha-791576 kubelet[806]: E1202 19:58:11.230623     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-791576_kube-system(eb073a7b89b0145d88727f941b3980dc)\"" pod="kube-system/kube-controller-manager-ha-791576" podUID="eb073a7b89b0145d88727f941b3980dc"
	Dec 02 19:58:26 ha-791576 kubelet[806]: I1202 19:58:26.228365     806 scope.go:117] "RemoveContainer" containerID="ad2e9bee4038ec5c5a0d947ee70e18e02644bca0aa4613c1bab63c216afebe8b"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-791576 -n ha-791576
helpers_test.go:269: (dbg) Run:  kubectl --context ha-791576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.01s)
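The scheduler errors captured above ("Operation cannot be fulfilled on pods/binding ... already assigned to node") are bind conflicts rather than failures to place the pods; the follow-up "Pod has been assigned to node. Abort adding it back to queue." lines show each pod did end up bound. As a reference point, a minimal Go sketch for confirming where those pods landed; the kube context and pod names are copied from the logs above and are assumptions of the sketch, not part of the test:

	// check_binding.go: confirm where pods named in the scheduler errors above are bound.
	// Illustrative sketch only; context and pod names are taken from this post-mortem.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const ctx = "ha-791576"
		pods := []struct{ ns, name string }{
			{"default", "busybox-7b57f96db7-k9bh8"},
			{"kube-system", "kube-proxy-2rjjx"},
			{"kube-system", "kindnet-glctp"},
		}
		for _, p := range pods {
			// jsonpath pulls the node the pod is actually bound to.
			out, err := exec.Command("kubectl", "--context", ctx, "-n", p.ns,
				"get", "pod", p.name, "-o", "jsonpath={.spec.nodeName}").CombinedOutput()
			if err != nil {
				fmt.Printf("%s/%s: %v (%s)\n", p.ns, p.name, err, out)
				continue
			}
			fmt.Printf("%s/%s is bound to %q\n", p.ns, p.name, out)
		}
	}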

                                                
                                    
TestJSONOutput/pause/Command (2.49s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-289137 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-289137 --output=json --user=testUser: exit status 80 (2.485965048s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"098f458b-dd89-4c82-8e8c-3f15958691cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-289137 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"26634752-bdf8-43a8-8e2b-b8558a09e654","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-02T20:05:14Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"0a201797-e4e4-443f-9890-94f349fb81dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-289137 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.49s)
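Each line in the stdout block above is a standalone CloudEvents-style JSON object (specversion, id, source, type, data). A minimal Go sketch, assuming only the field layout visible in that output, that reads such a stream from stdin and surfaces io.k8s.sigs.minikube.error events like the GUEST_PAUSE failure:

	// parse_events.go: decode a minikube --output=json event stream from stdin.
	// Illustrative sketch based only on the field layout visible in the stdout above.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events carry long messages
		for sc.Scan() {
			line := sc.Bytes()
			if len(line) == 0 {
				continue
			}
			var ev event
			if err := json.Unmarshal(line, &ev); err != nil {
				fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
				continue
			}
			// Error events carry the exit code and message, e.g. GUEST_PAUSE above.
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event %q (exitcode %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

Feeding it the same command, for example out/minikube-linux-arm64 pause -p json-output-289137 --output=json --user=testUser | go run parse_events.go, would print the GUEST_PAUSE name, exit code 80, and the runc error message shown above.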

                                                
                                    
TestJSONOutput/unpause/Command (1.89s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-289137 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-289137 --output=json --user=testUser: exit status 80 (1.89289973s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d5085ccd-b29b-45fc-91ed-0b23f08caba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-289137 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0f32f2f4-d29d-486f-ba43-cec1036ab4d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-02T20:05:15Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"0b158ee0-ce05-46a9-b65e-49d664959c29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-289137 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.89s)
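Both the pause and unpause failures reduce to the same underlying error, open /run/runc: no such file or directory, raised when minikube runs sudo runc list -f json inside the node. A short illustrative Go sketch for inspecting that path from the host; it assumes the docker driver, where the node container is named after the profile (json-output-289137 in this run):

	// check_runc.go: inspect the runc state directory inside the minikube node container.
	// Illustrative sketch; assumes the docker driver and the profile name from this run.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const node = "json-output-289137"

		// Same listing minikube's pause/unpause path runs, per the error messages above.
		out, err := exec.Command("docker", "exec", node,
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed, as in the test: %v\n%s", err, out)
			// A missing /run/runc can simply mean runc has not written any container
			// state under that path; listing /run shows what is actually present.
			ls, lsErr := exec.Command("docker", "exec", node, "ls", "-la", "/run").CombinedOutput()
			fmt.Printf("ls /run (err=%v):\n%s", lsErr, ls)
			return
		}
		fmt.Printf("runc containers:\n%s", out)
	}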

                                                
                                    
TestKubernetesUpgrade (794.48s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.129054944s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-080046
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-080046: (1.322323529s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-080046 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-080046 status --format={{.Host}}: exit status 7 (74.690748ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m32.745757983s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-080046] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-080046" primary control-plane node in "kubernetes-upgrade-080046" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:23:53.564669  181375 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:23:53.564861  181375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:23:53.564892  181375 out.go:374] Setting ErrFile to fd 2...
	I1202 20:23:53.564913  181375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:23:53.565304  181375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:23:53.565811  181375 out.go:368] Setting JSON to false
	I1202 20:23:53.566696  181375 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7572,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 20:23:53.566824  181375 start.go:143] virtualization:  
	I1202 20:23:53.572396  181375 out.go:179] * [kubernetes-upgrade-080046] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 20:23:53.575259  181375 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 20:23:53.575410  181375 notify.go:221] Checking for updates...
	I1202 20:23:53.580835  181375 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:23:53.584023  181375 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 20:23:53.586933  181375 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 20:23:53.589769  181375 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 20:23:53.592802  181375 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:23:53.596257  181375 config.go:182] Loaded profile config "kubernetes-upgrade-080046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1202 20:23:53.596859  181375 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:23:53.620224  181375 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 20:23:53.620334  181375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:23:53.676127  181375 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 20:23:53.667018393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:23:53.676233  181375 docker.go:319] overlay module found
	I1202 20:23:53.679456  181375 out.go:179] * Using the docker driver based on existing profile
	I1202 20:23:53.682100  181375 start.go:309] selected driver: docker
	I1202 20:23:53.682124  181375 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-080046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-080046 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:23:53.682231  181375 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:23:53.682946  181375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:23:53.734183  181375 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 20:23:53.725565428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:23:53.734533  181375 cni.go:84] Creating CNI manager for ""
	I1202 20:23:53.734606  181375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:23:53.734648  181375 start.go:353] cluster config:
	{Name:kubernetes-upgrade-080046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-080046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:23:53.737692  181375 out.go:179] * Starting "kubernetes-upgrade-080046" primary control-plane node in "kubernetes-upgrade-080046" cluster
	I1202 20:23:53.740487  181375 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:23:53.743295  181375 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:23:53.746270  181375 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:23:53.746346  181375 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:23:53.765911  181375 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:23:53.765934  181375 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 20:23:53.818640  181375 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1202 20:23:54.023193  181375 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1202 20:23:54.023341  181375 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/config.json ...
	I1202 20:23:54.023477  181375 cache.go:107] acquiring lock: {Name:mk82385e98b3cea3b61a8b5a1a83eda944359c19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023563  181375 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 20:23:54.023573  181375 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.038µs
	I1202 20:23:54.023580  181375 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:23:54.023586  181375 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 20:23:54.023598  181375 cache.go:107] acquiring lock: {Name:mk5c88154da726c0e877671804d288814a698d4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023618  181375 start.go:360] acquireMachinesLock for kubernetes-upgrade-080046: {Name:mk2c43a255c62e369902c6728688d721886daec5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023631  181375 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 20:23:54.023643  181375 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 39.901µs
	I1202 20:23:54.023650  181375 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 20:23:54.023658  181375 start.go:364] duration metric: took 26.1µs to acquireMachinesLock for "kubernetes-upgrade-080046"
	I1202 20:23:54.023660  181375 cache.go:107] acquiring lock: {Name:mkde8d7e608ffe8b23880c21e9b074ae7de483e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023672  181375 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:23:54.023683  181375 fix.go:54] fixHost starting: 
	I1202 20:23:54.023691  181375 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 20:23:54.023697  181375 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 38.136µs
	I1202 20:23:54.023703  181375 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 20:23:54.023715  181375 cache.go:107] acquiring lock: {Name:mk116a198b5f7f156ffafab505880cf93ef75453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023763  181375 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 20:23:54.023769  181375 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 54.981µs
	I1202 20:23:54.023775  181375 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 20:23:54.023785  181375 cache.go:107] acquiring lock: {Name:mkcb2af264a35840ea5d8236f1658b753015a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023814  181375 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 20:23:54.023819  181375 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 35.568µs
	I1202 20:23:54.023830  181375 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 20:23:54.023839  181375 cache.go:107] acquiring lock: {Name:mk8071f88de82d86e44710e4b18016f632332c3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023866  181375 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 20:23:54.023875  181375 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33µs
	I1202 20:23:54.023881  181375 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 20:23:54.023891  181375 cache.go:107] acquiring lock: {Name:mk469eb005ec792b3ef7501814396993e4b1e54e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023915  181375 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 20:23:54.023920  181375 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 30.908µs
	I1202 20:23:54.023925  181375 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 20:23:54.023934  181375 cache.go:107] acquiring lock: {Name:mk938f41a697907b7c64249e425b5f6814f5046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:23:54.023949  181375 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-080046 --format={{.State.Status}}
	I1202 20:23:54.023960  181375 cache.go:115] /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 20:23:54.023965  181375 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.196µs
	I1202 20:23:54.023970  181375 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 20:23:54.023978  181375 cache.go:87] Successfully saved all images to host disk.
	I1202 20:23:54.044769  181375 fix.go:112] recreateIfNeeded on kubernetes-upgrade-080046: state=Stopped err=<nil>
	W1202 20:23:54.044811  181375 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:23:54.048261  181375 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-080046" ...
	I1202 20:23:54.048333  181375 cli_runner.go:164] Run: docker start kubernetes-upgrade-080046
	I1202 20:23:54.420798  181375 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-080046 --format={{.State.Status}}
	I1202 20:23:54.447446  181375 kic.go:430] container "kubernetes-upgrade-080046" state is running.
	I1202 20:23:54.452588  181375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-080046
	I1202 20:23:54.488367  181375 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/config.json ...
	I1202 20:23:54.488578  181375 machine.go:94] provisionDockerMachine start ...
	I1202 20:23:54.488648  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:54.511464  181375 main.go:143] libmachine: Using SSH client type: native
	I1202 20:23:54.511788  181375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1202 20:23:54.511797  181375 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:23:54.512744  181375 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 20:23:57.677809  181375 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-080046
	
	I1202 20:23:57.677872  181375 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-080046"
	I1202 20:23:57.677974  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:57.699845  181375 main.go:143] libmachine: Using SSH client type: native
	I1202 20:23:57.700186  181375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1202 20:23:57.700197  181375 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-080046 && echo "kubernetes-upgrade-080046" | sudo tee /etc/hostname
	I1202 20:23:57.892070  181375 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-080046
	
	I1202 20:23:57.892237  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:57.925356  181375 main.go:143] libmachine: Using SSH client type: native
	I1202 20:23:57.925698  181375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1202 20:23:57.925715  181375 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-080046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-080046/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-080046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:23:58.094079  181375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:23:58.094109  181375 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 20:23:58.094148  181375 ubuntu.go:190] setting up certificates
	I1202 20:23:58.094158  181375 provision.go:84] configureAuth start
	I1202 20:23:58.094227  181375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-080046
	I1202 20:23:58.112943  181375 provision.go:143] copyHostCerts
	I1202 20:23:58.113018  181375 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 20:23:58.113032  181375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 20:23:58.113108  181375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 20:23:58.113223  181375 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 20:23:58.113235  181375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 20:23:58.113263  181375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 20:23:58.113324  181375 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 20:23:58.113332  181375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 20:23:58.113358  181375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 20:23:58.113457  181375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-080046 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-080046 localhost minikube]
	I1202 20:23:58.275555  181375 provision.go:177] copyRemoteCerts
	I1202 20:23:58.275649  181375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:23:58.275695  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:58.293372  181375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/kubernetes-upgrade-080046/id_rsa Username:docker}
	I1202 20:23:58.397340  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:23:58.415656  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1202 20:23:58.433323  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1202 20:23:58.449978  181375 provision.go:87] duration metric: took 355.798433ms to configureAuth
	I1202 20:23:58.450002  181375 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:23:58.450185  181375 config.go:182] Loaded profile config "kubernetes-upgrade-080046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:23:58.450289  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:58.467437  181375 main.go:143] libmachine: Using SSH client type: native
	I1202 20:23:58.467860  181375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1202 20:23:58.467884  181375 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:23:58.907056  181375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:23:58.907081  181375 machine.go:97] duration metric: took 4.418485931s to provisionDockerMachine
	I1202 20:23:58.907111  181375 start.go:293] postStartSetup for "kubernetes-upgrade-080046" (driver="docker")
	I1202 20:23:58.907135  181375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:23:58.907204  181375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:23:58.907245  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:58.927649  181375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/kubernetes-upgrade-080046/id_rsa Username:docker}
	I1202 20:23:59.044763  181375 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:23:59.052082  181375 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:23:59.052117  181375 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:23:59.052129  181375 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 20:23:59.052181  181375 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 20:23:59.052265  181375 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 20:23:59.052372  181375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:23:59.082355  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 20:23:59.132752  181375 start.go:296] duration metric: took 225.615437ms for postStartSetup
	I1202 20:23:59.132826  181375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:23:59.132874  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:59.164794  181375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/kubernetes-upgrade-080046/id_rsa Username:docker}
	I1202 20:23:59.279644  181375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:23:59.288932  181375 fix.go:56] duration metric: took 5.265247933s for fixHost
	I1202 20:23:59.289066  181375 start.go:83] releasing machines lock for "kubernetes-upgrade-080046", held for 5.265399026s
	I1202 20:23:59.289242  181375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-080046
	I1202 20:23:59.322634  181375 ssh_runner.go:195] Run: cat /version.json
	I1202 20:23:59.322684  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:59.322945  181375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:23:59.322993  181375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-080046
	I1202 20:23:59.381068  181375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/kubernetes-upgrade-080046/id_rsa Username:docker}
	I1202 20:23:59.386479  181375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/kubernetes-upgrade-080046/id_rsa Username:docker}
	I1202 20:23:59.608813  181375 ssh_runner.go:195] Run: systemctl --version
	I1202 20:23:59.615910  181375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:23:59.680691  181375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:23:59.686404  181375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:23:59.686470  181375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:23:59.701319  181375 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:23:59.701340  181375 start.go:496] detecting cgroup driver to use...
	I1202 20:23:59.701371  181375 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 20:23:59.701418  181375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:23:59.723737  181375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:23:59.740537  181375 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:23:59.740595  181375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:23:59.757403  181375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:23:59.778399  181375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:23:59.965340  181375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:24:00.291715  181375 docker.go:234] disabling docker service ...
	I1202 20:24:00.291814  181375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:24:00.338683  181375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:24:00.370957  181375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:24:00.610541  181375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:24:00.779629  181375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:24:00.796190  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:24:00.814211  181375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:24:00.814346  181375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:24:00.831296  181375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:24:00.831405  181375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:24:00.844278  181375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:24:00.854432  181375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:24:00.863719  181375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:24:00.873131  181375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:24:00.882918  181375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:24:00.891738  181375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:24:00.900499  181375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:24:00.908007  181375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:24:00.915628  181375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:24:01.033370  181375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:24:01.209675  181375 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:24:01.209793  181375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:24:01.213951  181375 start.go:564] Will wait 60s for crictl version
	I1202 20:24:01.214079  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:01.218031  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:24:01.247963  181375 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:24:01.248112  181375 ssh_runner.go:195] Run: crio --version
	I1202 20:24:01.275160  181375 ssh_runner.go:195] Run: crio --version
	I1202 20:24:01.305775  181375 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.2 ...
	I1202 20:24:01.308773  181375 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-080046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:24:01.325132  181375 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 20:24:01.329111  181375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:24:01.338828  181375 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-080046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-080046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:24:01.338939  181375 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1202 20:24:01.338994  181375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:24:01.371326  181375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 20:24:01.371353  181375 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 20:24:01.371407  181375 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:24:01.371619  181375 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:24:01.371718  181375 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:24:01.371822  181375 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:24:01.371920  181375 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:24:01.372015  181375 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 20:24:01.372112  181375 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:24:01.372204  181375 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:24:01.373230  181375 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:24:01.374739  181375 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:24:01.375075  181375 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:24:01.375213  181375 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:24:01.375329  181375 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 20:24:01.375440  181375 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:24:01.375552  181375 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:24:01.375579  181375 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:24:01.699265  181375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:24:01.718104  181375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1202 20:24:01.736575  181375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1202 20:24:01.737800  181375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:24:01.741826  181375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:24:01.756824  181375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:24:01.759631  181375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:24:01.765001  181375 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 20:24:01.765047  181375 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:24:01.765122  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:01.800949  181375 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 20:24:01.800992  181375 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 20:24:01.801102  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:01.883252  181375 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 20:24:01.883353  181375 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 20:24:01.883376  181375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:24:01.883456  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:01.883565  181375 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 20:24:01.883600  181375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:24:01.883631  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:01.883689  181375 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 20:24:01.883734  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:01.898486  181375 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 20:24:01.898545  181375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:24:01.898603  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:01.898662  181375 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 20:24:01.898701  181375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:24:01.898746  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:24:01.898784  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:01.898834  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:24:01.898908  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:24:01.898948  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:24:01.899009  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:24:01.990353  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:24:01.990459  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:24:01.990542  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:24:01.990688  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:24:01.990703  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:24:01.990895  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:24:01.990898  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:24:02.108496  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 20:24:02.108622  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:24:02.108714  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:24:02.108802  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 20:24:02.108908  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 20:24:02.109016  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 20:24:02.109125  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 20:24:02.217276  181375 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 20:24:02.217520  181375 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 20:24:02.217604  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:24:02.217629  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:24:02.217370  181375 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 20:24:02.217777  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:24:02.217437  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 20:24:02.217461  181375 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 20:24:02.217950  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 20:24:02.217467  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 20:24:02.217484  181375 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 20:24:02.218055  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:24:02.252694  181375 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 20:24:02.252798  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:24:02.252860  181375 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 20:24:02.252877  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 20:24:02.252921  181375 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 20:24:02.252938  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 20:24:02.252980  181375 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 20:24:02.252994  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 20:24:02.253054  181375 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 20:24:02.253073  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 20:24:02.274664  181375 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 20:24:02.274755  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 20:24:02.274688  181375 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 20:24:02.274975  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:24:02.276019  181375 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 20:24:02.276049  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 20:24:02.344321  181375 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 20:24:02.344452  181375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1202 20:24:02.452894  181375 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 20:24:02.452931  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	W1202 20:24:02.563313  181375 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 20:24:02.563558  181375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:24:03.069017  181375 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 20:24:03.069064  181375 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:24:03.069114  181375 ssh_runner.go:195] Run: which crictl
	I1202 20:24:03.069757  181375 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 20:24:03.069873  181375 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:24:03.069949  181375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 20:24:03.147273  181375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:24:05.242193  181375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.172215292s)
	I1202 20:24:05.242223  181375 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 20:24:05.242241  181375 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:24:05.242294  181375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1202 20:24:05.242354  181375 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.095055003s)
	I1202 20:24:05.242367  181375 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 20:24:05.242432  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:24:06.881631  181375 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.639175887s)
	I1202 20:24:06.881717  181375 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 20:24:06.881749  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 20:24:06.881884  181375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.639573693s)
	I1202 20:24:06.881900  181375 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 20:24:06.881917  181375 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:24:06.881959  181375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 20:24:08.496880  181375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.61489466s)
	I1202 20:24:08.496904  181375 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 20:24:08.496921  181375 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:24:08.496979  181375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1202 20:24:11.089058  181375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.592059192s)
	I1202 20:24:11.089083  181375 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 20:24:11.089100  181375 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:24:11.089148  181375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 20:24:12.963598  181375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.874429981s)
	I1202 20:24:12.963622  181375 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 20:24:12.963640  181375 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:24:12.963687  181375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 20:24:15.324898  181375 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.361180892s)
	I1202 20:24:15.324925  181375 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 20:24:15.324942  181375 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:24:15.324989  181375 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1202 20:24:16.114150  181375 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2526/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 20:24:16.114189  181375 cache_images.go:125] Successfully loaded all cached images
	I1202 20:24:16.114196  181375 cache_images.go:94] duration metric: took 14.742828761s to LoadCachedImages
	I1202 20:24:16.114208  181375 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1202 20:24:16.114320  181375 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-080046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-080046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:24:16.114406  181375 ssh_runner.go:195] Run: crio config
	I1202 20:24:16.212440  181375 cni.go:84] Creating CNI manager for ""
	I1202 20:24:16.212466  181375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:24:16.212490  181375 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:24:16.212513  181375 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-080046 NodeName:kubernetes-upgrade-080046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:24:16.212635  181375 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-080046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:24:16.212713  181375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 20:24:16.229276  181375 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 20:24:16.229342  181375 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 20:24:16.238767  181375 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 20:24:16.238859  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 20:24:16.238940  181375 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 20:24:16.238974  181375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:24:16.239047  181375 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 20:24:16.239095  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 20:24:16.263899  181375 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 20:24:16.263974  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 20:24:16.264065  181375 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 20:24:16.264096  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 20:24:16.264243  181375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 20:24:16.315264  181375 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 20:24:16.315351  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1202 20:24:17.378744  181375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:24:17.395660  181375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1202 20:24:17.415789  181375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 20:24:17.434807  181375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1202 20:24:17.456450  181375 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:24:17.462822  181375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:24:17.479055  181375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:24:17.674322  181375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:24:17.720693  181375 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046 for IP: 192.168.76.2
	I1202 20:24:17.720717  181375 certs.go:195] generating shared ca certs ...
	I1202 20:24:17.720733  181375 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:24:17.720906  181375 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 20:24:17.720953  181375 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 20:24:17.720965  181375 certs.go:257] generating profile certs ...
	I1202 20:24:17.721105  181375 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/client.key
	I1202 20:24:17.721189  181375 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/apiserver.key.7f07ec38
	I1202 20:24:17.721249  181375 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/proxy-client.key
	I1202 20:24:17.721377  181375 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 20:24:17.721420  181375 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 20:24:17.721433  181375 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:24:17.721468  181375 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:24:17.721498  181375 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:24:17.721526  181375 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 20:24:17.721581  181375 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 20:24:17.722209  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:24:17.763296  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:24:17.807502  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:24:17.854883  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 20:24:17.893274  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 20:24:17.919400  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:24:17.944334  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:24:17.983600  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:24:18.014111  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:24:18.054761  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 20:24:18.084800  181375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 20:24:18.126214  181375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:24:18.150110  181375 ssh_runner.go:195] Run: openssl version
	I1202 20:24:18.157351  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:24:18.168856  181375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:24:18.172891  181375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:24:18.172966  181375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:24:18.238821  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:24:18.248600  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 20:24:18.257289  181375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 20:24:18.262076  181375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 20:24:18.262139  181375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 20:24:18.313389  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 20:24:18.322216  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 20:24:18.330732  181375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 20:24:18.335147  181375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 20:24:18.335255  181375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 20:24:18.386666  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:24:18.398709  181375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:24:18.405496  181375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:24:18.462535  181375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:24:18.509443  181375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:24:18.567063  181375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:24:18.662160  181375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:24:18.748661  181375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:24:18.794521  181375 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-080046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-080046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:24:18.794629  181375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:24:18.794719  181375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:24:18.836180  181375 cri.go:89] found id: ""
	I1202 20:24:18.836264  181375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:24:18.845227  181375 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:24:18.845247  181375 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:24:18.845314  181375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:24:18.854221  181375 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:24:18.854832  181375 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-080046" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 20:24:18.855111  181375 kubeconfig.go:62] /home/jenkins/minikube-integration/22021-2526/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-080046" cluster setting kubeconfig missing "kubernetes-upgrade-080046" context setting]
	I1202 20:24:18.855604  181375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:24:18.856447  181375 kapi.go:59] client config for kubernetes-upgrade-080046: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/kubernetes-upgrade-080046/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 20:24:18.857053  181375 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 20:24:18.857106  181375 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 20:24:18.857115  181375 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 20:24:18.857120  181375 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 20:24:18.857124  181375 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 20:24:18.857438  181375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:24:18.867916  181375 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 20:23:34.838026948 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 20:24:17.452790533 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-080046"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1202 20:24:18.867946  181375 kubeadm.go:1161] stopping kube-system containers ...
	I1202 20:24:18.867959  181375 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 20:24:18.868023  181375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:24:18.900208  181375 cri.go:89] found id: ""
	I1202 20:24:18.900286  181375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 20:24:18.920184  181375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:24:18.935026  181375 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec  2 20:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  2 20:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  2 20:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  2 20:23 /etc/kubernetes/scheduler.conf
	
	I1202 20:24:18.935106  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:24:18.949060  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:24:18.966296  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:24:18.976781  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:24:18.976854  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:24:18.988452  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:24:19.000479  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:24:19.000555  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
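Note: the grep/rm sequence above is minikube pruning kubeconfigs that do not reference the expected control-plane endpoint; admin.conf and kubelet.conf matched, while controller-manager.conf and scheduler.conf did not and were removed. A rough manual equivalent, using only the endpoint and file paths already shown in this log (the loop itself is illustrative, not minikube's code):

    for f in /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "$f" || sudo rm -f "$f"
    done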
	I1202 20:24:19.016647  181375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:24:19.028486  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:24:19.106751  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:24:20.057856  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:24:20.392851  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:24:20.544680  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:24:20.606860  181375 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:24:20.606949  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:21.107793  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:21.607353  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:22.107605  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:22.607548  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:23.107134  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:23.607139  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:24.107689  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:24.607990  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:25.107640  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:25.607076  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:26.107576  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:26.607819  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:27.107103  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:27.607983  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:28.107901  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:28.607085  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:29.107243  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:29.607800  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:30.107094  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:30.607074  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:31.107683  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:31.607738  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:32.107204  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:32.607075  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:33.107859  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:33.607828  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:34.107411  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:34.607833  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:35.107125  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:35.607112  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:36.107977  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:36.607060  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:37.107188  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:37.607081  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:38.107749  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:38.607762  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:39.107874  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:39.607302  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:40.107457  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:40.607458  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:41.107816  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:41.607865  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:42.107161  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:42.607726  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:43.107845  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:43.607027  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:44.108016  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:44.607984  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:45.107709  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:45.607081  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:46.107603  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:46.606999  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:47.107824  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:47.607604  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:48.107650  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:48.607663  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:49.107957  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:49.607431  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:50.107087  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:50.607365  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:51.107880  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:51.607112  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:52.107069  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:52.607575  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:53.107096  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:53.607639  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:54.107073  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:54.607058  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:55.107041  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:55.607062  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:56.107843  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:56.607403  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:57.107762  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:57.607704  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:58.107236  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:58.607220  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:59.107074  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:24:59.607421  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:00.107905  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:00.608009  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:01.107776  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:01.607291  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:02.107348  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:02.607058  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:03.107638  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:03.607560  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:04.107119  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:04.607068  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:05.107089  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:05.607066  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:06.107863  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:06.607115  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:07.107124  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:07.607086  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:08.107451  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:08.607123  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:09.107086  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:09.607796  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:10.107109  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:10.607987  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:11.107247  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:11.607062  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:12.107073  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:12.607107  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:13.107936  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:13.607106  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:14.107113  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:14.608053  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:15.107542  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:15.607081  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:16.107001  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:16.607044  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:17.107898  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:17.607094  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:18.107821  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:18.607130  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:19.107041  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:19.607853  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:20.107277  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
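Note: from 20:24:20 to 20:25:20 the runner polls for a kube-apiserver process roughly every half second and never finds one, after which it falls back to inspecting CRI containers and gathering logs. A sketch of reproducing the same check by hand on the node (the pgrep pattern is copied verbatim from the log; the loop and ~60s timeout are illustrative assumptions):

    # poll for the apiserver process for about 60 seconds, as the wait loop above does
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done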
	I1202 20:25:20.607828  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:20.607899  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:20.649674  181375 cri.go:89] found id: ""
	I1202 20:25:20.649694  181375 logs.go:282] 0 containers: []
	W1202 20:25:20.649703  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:20.649709  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:20.649762  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:20.685057  181375 cri.go:89] found id: ""
	I1202 20:25:20.685079  181375 logs.go:282] 0 containers: []
	W1202 20:25:20.685087  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:20.685107  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:20.685161  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:20.718268  181375 cri.go:89] found id: ""
	I1202 20:25:20.718299  181375 logs.go:282] 0 containers: []
	W1202 20:25:20.718307  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:20.718314  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:20.718371  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:20.751971  181375 cri.go:89] found id: ""
	I1202 20:25:20.751992  181375 logs.go:282] 0 containers: []
	W1202 20:25:20.752000  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:20.752007  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:20.752060  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:20.802381  181375 cri.go:89] found id: ""
	I1202 20:25:20.802401  181375 logs.go:282] 0 containers: []
	W1202 20:25:20.802410  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:20.802416  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:20.802480  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:20.848002  181375 cri.go:89] found id: ""
	I1202 20:25:20.848022  181375 logs.go:282] 0 containers: []
	W1202 20:25:20.848030  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:20.848036  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:20.848091  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:20.888412  181375 cri.go:89] found id: ""
	I1202 20:25:20.888433  181375 logs.go:282] 0 containers: []
	W1202 20:25:20.888440  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:20.888447  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:20.888510  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:20.929822  181375 cri.go:89] found id: ""
	I1202 20:25:20.929891  181375 logs.go:282] 0 containers: []
	W1202 20:25:20.929913  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:20.929935  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:20.929974  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:21.038435  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:21.038476  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:21.060898  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:21.060989  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:21.156707  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:21.156740  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:21.156754  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:21.201825  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:21.201858  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
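Note: each diagnostic pass from here on repeats the same pattern: list CRI containers per control-plane component (all come back empty), gather kubelet and dmesg output, attempt "kubectl describe nodes" (which fails with "connection refused" on localhost:8443 because no apiserver is running), then dump the CRI-O journal and overall container status. The commands below are copied from the log lines above and can be run manually on the node to reproduce the same diagnosis (a sketch of a manual session, not the test harness itself):

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a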
	I1202 20:25:23.737134  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:23.755761  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:23.755865  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:23.823970  181375 cri.go:89] found id: ""
	I1202 20:25:23.824043  181375 logs.go:282] 0 containers: []
	W1202 20:25:23.824066  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:23.824085  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:23.824193  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:23.870278  181375 cri.go:89] found id: ""
	I1202 20:25:23.870356  181375 logs.go:282] 0 containers: []
	W1202 20:25:23.870388  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:23.870409  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:23.870517  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:23.931395  181375 cri.go:89] found id: ""
	I1202 20:25:23.931433  181375 logs.go:282] 0 containers: []
	W1202 20:25:23.931442  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:23.931448  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:23.931515  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:23.977339  181375 cri.go:89] found id: ""
	I1202 20:25:23.977379  181375 logs.go:282] 0 containers: []
	W1202 20:25:23.977387  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:23.977394  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:23.977465  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:24.020153  181375 cri.go:89] found id: ""
	I1202 20:25:24.020182  181375 logs.go:282] 0 containers: []
	W1202 20:25:24.020190  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:24.020197  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:24.020265  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:24.079250  181375 cri.go:89] found id: ""
	I1202 20:25:24.079290  181375 logs.go:282] 0 containers: []
	W1202 20:25:24.079300  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:24.079307  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:24.079375  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:24.122053  181375 cri.go:89] found id: ""
	I1202 20:25:24.122096  181375 logs.go:282] 0 containers: []
	W1202 20:25:24.122106  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:24.122113  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:24.122189  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:24.170638  181375 cri.go:89] found id: ""
	I1202 20:25:24.170674  181375 logs.go:282] 0 containers: []
	W1202 20:25:24.170683  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:24.170691  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:24.170703  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:24.333551  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:24.333581  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:24.333600  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:24.431354  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:24.431430  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:24.515316  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:24.515393  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:24.614071  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:24.614107  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:27.140478  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:27.150859  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:27.150930  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:27.177343  181375 cri.go:89] found id: ""
	I1202 20:25:27.177368  181375 logs.go:282] 0 containers: []
	W1202 20:25:27.177386  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:27.177394  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:27.177491  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:27.202406  181375 cri.go:89] found id: ""
	I1202 20:25:27.202430  181375 logs.go:282] 0 containers: []
	W1202 20:25:27.202448  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:27.202455  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:27.202511  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:27.231130  181375 cri.go:89] found id: ""
	I1202 20:25:27.231153  181375 logs.go:282] 0 containers: []
	W1202 20:25:27.231161  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:27.231168  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:27.231236  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:27.256228  181375 cri.go:89] found id: ""
	I1202 20:25:27.256256  181375 logs.go:282] 0 containers: []
	W1202 20:25:27.256266  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:27.256272  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:27.256332  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:27.282428  181375 cri.go:89] found id: ""
	I1202 20:25:27.282458  181375 logs.go:282] 0 containers: []
	W1202 20:25:27.282467  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:27.282473  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:27.282531  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:27.320057  181375 cri.go:89] found id: ""
	I1202 20:25:27.320089  181375 logs.go:282] 0 containers: []
	W1202 20:25:27.320098  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:27.320105  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:27.320172  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:27.350692  181375 cri.go:89] found id: ""
	I1202 20:25:27.350718  181375 logs.go:282] 0 containers: []
	W1202 20:25:27.350734  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:27.350741  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:27.350797  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:27.381385  181375 cri.go:89] found id: ""
	I1202 20:25:27.381410  181375 logs.go:282] 0 containers: []
	W1202 20:25:27.381419  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:27.381437  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:27.381449  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:27.423779  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:27.423817  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:27.459864  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:27.459894  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:27.526931  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:27.526964  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:27.541424  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:27.541453  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:27.617597  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:30.118565  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:30.130000  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:30.130104  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:30.161340  181375 cri.go:89] found id: ""
	I1202 20:25:30.161366  181375 logs.go:282] 0 containers: []
	W1202 20:25:30.161376  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:30.161384  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:30.161443  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:30.188312  181375 cri.go:89] found id: ""
	I1202 20:25:30.188336  181375 logs.go:282] 0 containers: []
	W1202 20:25:30.188345  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:30.188351  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:30.188413  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:30.215064  181375 cri.go:89] found id: ""
	I1202 20:25:30.215089  181375 logs.go:282] 0 containers: []
	W1202 20:25:30.215098  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:30.215105  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:30.215163  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:30.242474  181375 cri.go:89] found id: ""
	I1202 20:25:30.242500  181375 logs.go:282] 0 containers: []
	W1202 20:25:30.242509  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:30.242515  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:30.242573  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:30.270672  181375 cri.go:89] found id: ""
	I1202 20:25:30.270693  181375 logs.go:282] 0 containers: []
	W1202 20:25:30.270701  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:30.270708  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:30.270781  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:30.304289  181375 cri.go:89] found id: ""
	I1202 20:25:30.304311  181375 logs.go:282] 0 containers: []
	W1202 20:25:30.304319  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:30.304326  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:30.304386  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:30.335115  181375 cri.go:89] found id: ""
	I1202 20:25:30.335142  181375 logs.go:282] 0 containers: []
	W1202 20:25:30.335151  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:30.335157  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:30.335215  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:30.370256  181375 cri.go:89] found id: ""
	I1202 20:25:30.370286  181375 logs.go:282] 0 containers: []
	W1202 20:25:30.370301  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:30.370310  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:30.370339  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:30.437787  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:30.437822  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:30.452318  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:30.452347  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:30.518985  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:30.519056  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:30.519083  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:30.558913  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:30.558948  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:33.092019  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:33.102381  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:33.102454  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:33.127536  181375 cri.go:89] found id: ""
	I1202 20:25:33.127561  181375 logs.go:282] 0 containers: []
	W1202 20:25:33.127570  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:33.127576  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:33.127659  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:33.152794  181375 cri.go:89] found id: ""
	I1202 20:25:33.152819  181375 logs.go:282] 0 containers: []
	W1202 20:25:33.152827  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:33.152834  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:33.152894  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:33.181686  181375 cri.go:89] found id: ""
	I1202 20:25:33.181709  181375 logs.go:282] 0 containers: []
	W1202 20:25:33.181718  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:33.181728  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:33.181787  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:33.208344  181375 cri.go:89] found id: ""
	I1202 20:25:33.208364  181375 logs.go:282] 0 containers: []
	W1202 20:25:33.208373  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:33.208379  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:33.208433  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:33.233976  181375 cri.go:89] found id: ""
	I1202 20:25:33.233999  181375 logs.go:282] 0 containers: []
	W1202 20:25:33.234007  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:33.234014  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:33.234072  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:33.259716  181375 cri.go:89] found id: ""
	I1202 20:25:33.259739  181375 logs.go:282] 0 containers: []
	W1202 20:25:33.259748  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:33.259758  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:33.259819  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:33.284773  181375 cri.go:89] found id: ""
	I1202 20:25:33.284797  181375 logs.go:282] 0 containers: []
	W1202 20:25:33.284814  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:33.284820  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:33.284876  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:33.321866  181375 cri.go:89] found id: ""
	I1202 20:25:33.321903  181375 logs.go:282] 0 containers: []
	W1202 20:25:33.321914  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:33.321923  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:33.321939  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:33.360231  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:33.360259  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:33.433776  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:33.433812  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:33.447853  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:33.447880  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:33.515131  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:33.515158  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:33.515172  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:36.058530  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:36.068929  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:36.068995  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:36.094495  181375 cri.go:89] found id: ""
	I1202 20:25:36.094529  181375 logs.go:282] 0 containers: []
	W1202 20:25:36.094539  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:36.094545  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:36.094606  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:36.121152  181375 cri.go:89] found id: ""
	I1202 20:25:36.121173  181375 logs.go:282] 0 containers: []
	W1202 20:25:36.121181  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:36.121186  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:36.121252  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:36.146611  181375 cri.go:89] found id: ""
	I1202 20:25:36.146634  181375 logs.go:282] 0 containers: []
	W1202 20:25:36.146643  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:36.146652  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:36.146715  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:36.179226  181375 cri.go:89] found id: ""
	I1202 20:25:36.179251  181375 logs.go:282] 0 containers: []
	W1202 20:25:36.179259  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:36.179265  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:36.179323  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:36.206398  181375 cri.go:89] found id: ""
	I1202 20:25:36.206430  181375 logs.go:282] 0 containers: []
	W1202 20:25:36.206439  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:36.206446  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:36.206546  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:36.237200  181375 cri.go:89] found id: ""
	I1202 20:25:36.237261  181375 logs.go:282] 0 containers: []
	W1202 20:25:36.237276  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:36.237284  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:36.237342  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:36.263405  181375 cri.go:89] found id: ""
	I1202 20:25:36.263427  181375 logs.go:282] 0 containers: []
	W1202 20:25:36.263436  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:36.263443  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:36.263498  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:36.288731  181375 cri.go:89] found id: ""
	I1202 20:25:36.288755  181375 logs.go:282] 0 containers: []
	W1202 20:25:36.288764  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:36.288778  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:36.288791  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:36.331778  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:36.331801  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:36.416752  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:36.416788  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:36.431297  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:36.431326  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:36.495624  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:36.495644  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:36.495658  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:39.038528  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:39.049059  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:39.049126  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:39.074362  181375 cri.go:89] found id: ""
	I1202 20:25:39.074395  181375 logs.go:282] 0 containers: []
	W1202 20:25:39.074404  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:39.074410  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:39.074469  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:39.102386  181375 cri.go:89] found id: ""
	I1202 20:25:39.102410  181375 logs.go:282] 0 containers: []
	W1202 20:25:39.102418  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:39.102424  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:39.102480  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:39.126309  181375 cri.go:89] found id: ""
	I1202 20:25:39.126331  181375 logs.go:282] 0 containers: []
	W1202 20:25:39.126340  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:39.126345  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:39.126408  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:39.154817  181375 cri.go:89] found id: ""
	I1202 20:25:39.154840  181375 logs.go:282] 0 containers: []
	W1202 20:25:39.154848  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:39.154854  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:39.154914  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:39.184404  181375 cri.go:89] found id: ""
	I1202 20:25:39.184471  181375 logs.go:282] 0 containers: []
	W1202 20:25:39.184495  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:39.184517  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:39.184605  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:39.210580  181375 cri.go:89] found id: ""
	I1202 20:25:39.210602  181375 logs.go:282] 0 containers: []
	W1202 20:25:39.210610  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:39.210617  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:39.210673  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:39.237353  181375 cri.go:89] found id: ""
	I1202 20:25:39.237376  181375 logs.go:282] 0 containers: []
	W1202 20:25:39.237383  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:39.237391  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:39.237455  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:39.263998  181375 cri.go:89] found id: ""
	I1202 20:25:39.264026  181375 logs.go:282] 0 containers: []
	W1202 20:25:39.264035  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:39.264044  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:39.264056  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:39.332627  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:39.332663  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:39.350653  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:39.350682  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:39.423382  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:39.423402  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:39.423416  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:39.463487  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:39.463518  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:41.991216  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:42.000898  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:42.000965  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:42.039206  181375 cri.go:89] found id: ""
	I1202 20:25:42.039230  181375 logs.go:282] 0 containers: []
	W1202 20:25:42.039239  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:42.039245  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:42.039312  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:42.067777  181375 cri.go:89] found id: ""
	I1202 20:25:42.067802  181375 logs.go:282] 0 containers: []
	W1202 20:25:42.067821  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:42.067828  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:42.067896  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:42.100780  181375 cri.go:89] found id: ""
	I1202 20:25:42.100807  181375 logs.go:282] 0 containers: []
	W1202 20:25:42.100817  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:42.100825  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:42.100896  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:42.141094  181375 cri.go:89] found id: ""
	I1202 20:25:42.141119  181375 logs.go:282] 0 containers: []
	W1202 20:25:42.141129  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:42.141137  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:42.141227  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:42.186433  181375 cri.go:89] found id: ""
	I1202 20:25:42.186458  181375 logs.go:282] 0 containers: []
	W1202 20:25:42.186468  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:42.186476  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:42.186545  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:42.222986  181375 cri.go:89] found id: ""
	I1202 20:25:42.223019  181375 logs.go:282] 0 containers: []
	W1202 20:25:42.223034  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:42.223042  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:42.223122  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:42.257121  181375 cri.go:89] found id: ""
	I1202 20:25:42.257154  181375 logs.go:282] 0 containers: []
	W1202 20:25:42.257163  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:42.257176  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:42.257251  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:42.289129  181375 cri.go:89] found id: ""
	I1202 20:25:42.289210  181375 logs.go:282] 0 containers: []
	W1202 20:25:42.289239  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:42.289263  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:42.289312  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:42.339351  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:42.339434  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:42.418937  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:42.418974  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:42.434396  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:42.434423  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:42.496476  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:42.496498  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:42.496512  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:45.037215  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:45.052749  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:45.052844  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:45.103166  181375 cri.go:89] found id: ""
	I1202 20:25:45.103192  181375 logs.go:282] 0 containers: []
	W1202 20:25:45.103202  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:45.103209  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:45.103280  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:45.157598  181375 cri.go:89] found id: ""
	I1202 20:25:45.157624  181375 logs.go:282] 0 containers: []
	W1202 20:25:45.157633  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:45.157641  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:45.157752  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:45.229117  181375 cri.go:89] found id: ""
	I1202 20:25:45.229147  181375 logs.go:282] 0 containers: []
	W1202 20:25:45.229159  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:45.229168  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:45.229250  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:45.290783  181375 cri.go:89] found id: ""
	I1202 20:25:45.290809  181375 logs.go:282] 0 containers: []
	W1202 20:25:45.290819  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:45.290827  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:45.290913  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:45.347394  181375 cri.go:89] found id: ""
	I1202 20:25:45.347417  181375 logs.go:282] 0 containers: []
	W1202 20:25:45.347425  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:45.347432  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:45.347522  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:45.387366  181375 cri.go:89] found id: ""
	I1202 20:25:45.387400  181375 logs.go:282] 0 containers: []
	W1202 20:25:45.387411  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:45.387419  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:45.387482  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:45.417550  181375 cri.go:89] found id: ""
	I1202 20:25:45.417574  181375 logs.go:282] 0 containers: []
	W1202 20:25:45.417584  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:45.417590  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:45.417674  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:45.448647  181375 cri.go:89] found id: ""
	I1202 20:25:45.448673  181375 logs.go:282] 0 containers: []
	W1202 20:25:45.448682  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:45.448690  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:45.448702  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:45.515547  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:45.515580  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:45.531186  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:45.531222  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:45.595922  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:45.595941  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:45.595954  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:45.635818  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:45.635852  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:48.168327  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:48.178533  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:48.178604  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:48.203301  181375 cri.go:89] found id: ""
	I1202 20:25:48.203323  181375 logs.go:282] 0 containers: []
	W1202 20:25:48.203332  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:48.203338  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:48.203393  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:48.227358  181375 cri.go:89] found id: ""
	I1202 20:25:48.227380  181375 logs.go:282] 0 containers: []
	W1202 20:25:48.227388  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:48.227395  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:48.227449  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:48.256599  181375 cri.go:89] found id: ""
	I1202 20:25:48.256621  181375 logs.go:282] 0 containers: []
	W1202 20:25:48.256629  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:48.256635  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:48.256691  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:48.280258  181375 cri.go:89] found id: ""
	I1202 20:25:48.280283  181375 logs.go:282] 0 containers: []
	W1202 20:25:48.280292  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:48.280298  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:48.280354  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:48.319093  181375 cri.go:89] found id: ""
	I1202 20:25:48.319118  181375 logs.go:282] 0 containers: []
	W1202 20:25:48.319127  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:48.319133  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:48.319196  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:48.355894  181375 cri.go:89] found id: ""
	I1202 20:25:48.355918  181375 logs.go:282] 0 containers: []
	W1202 20:25:48.355927  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:48.355934  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:48.355991  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:48.384339  181375 cri.go:89] found id: ""
	I1202 20:25:48.384364  181375 logs.go:282] 0 containers: []
	W1202 20:25:48.384373  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:48.384380  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:48.384436  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:48.410938  181375 cri.go:89] found id: ""
	I1202 20:25:48.410960  181375 logs.go:282] 0 containers: []
	W1202 20:25:48.410967  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:48.411019  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:48.411073  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:48.477549  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:48.477580  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:48.491716  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:48.491745  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:48.562659  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:48.562679  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:48.562700  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:48.610181  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:48.610221  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:51.142932  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:51.153619  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:51.153715  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:51.180779  181375 cri.go:89] found id: ""
	I1202 20:25:51.180800  181375 logs.go:282] 0 containers: []
	W1202 20:25:51.180808  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:51.180814  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:51.180871  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:51.206099  181375 cri.go:89] found id: ""
	I1202 20:25:51.206124  181375 logs.go:282] 0 containers: []
	W1202 20:25:51.206132  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:51.206139  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:51.206200  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:51.231721  181375 cri.go:89] found id: ""
	I1202 20:25:51.231745  181375 logs.go:282] 0 containers: []
	W1202 20:25:51.231753  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:51.231760  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:51.231865  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:51.256676  181375 cri.go:89] found id: ""
	I1202 20:25:51.256699  181375 logs.go:282] 0 containers: []
	W1202 20:25:51.256708  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:51.256714  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:51.256768  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:51.280386  181375 cri.go:89] found id: ""
	I1202 20:25:51.280407  181375 logs.go:282] 0 containers: []
	W1202 20:25:51.280415  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:51.280422  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:51.280479  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:51.316176  181375 cri.go:89] found id: ""
	I1202 20:25:51.316200  181375 logs.go:282] 0 containers: []
	W1202 20:25:51.316209  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:51.316215  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:51.316270  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:51.342553  181375 cri.go:89] found id: ""
	I1202 20:25:51.342577  181375 logs.go:282] 0 containers: []
	W1202 20:25:51.342585  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:51.342591  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:51.342657  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:51.372126  181375 cri.go:89] found id: ""
	I1202 20:25:51.372147  181375 logs.go:282] 0 containers: []
	W1202 20:25:51.372156  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:51.372165  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:51.372176  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:51.401240  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:51.401269  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:51.468689  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:51.468722  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:51.483061  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:51.483088  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:51.551001  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:51.551021  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:51.551033  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:54.093379  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:54.104202  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:54.104270  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:54.153043  181375 cri.go:89] found id: ""
	I1202 20:25:54.153074  181375 logs.go:282] 0 containers: []
	W1202 20:25:54.153083  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:54.153090  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:54.153148  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:54.190477  181375 cri.go:89] found id: ""
	I1202 20:25:54.190501  181375 logs.go:282] 0 containers: []
	W1202 20:25:54.190510  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:54.190517  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:54.190579  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:54.215920  181375 cri.go:89] found id: ""
	I1202 20:25:54.215942  181375 logs.go:282] 0 containers: []
	W1202 20:25:54.215950  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:54.215956  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:54.216020  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:54.241766  181375 cri.go:89] found id: ""
	I1202 20:25:54.241787  181375 logs.go:282] 0 containers: []
	W1202 20:25:54.241796  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:54.241802  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:54.241861  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:54.266606  181375 cri.go:89] found id: ""
	I1202 20:25:54.266627  181375 logs.go:282] 0 containers: []
	W1202 20:25:54.266636  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:54.266643  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:54.266701  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:54.299197  181375 cri.go:89] found id: ""
	I1202 20:25:54.299271  181375 logs.go:282] 0 containers: []
	W1202 20:25:54.299302  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:54.299323  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:54.299395  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:54.335736  181375 cri.go:89] found id: ""
	I1202 20:25:54.335757  181375 logs.go:282] 0 containers: []
	W1202 20:25:54.335766  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:54.335772  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:54.335830  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:54.370862  181375 cri.go:89] found id: ""
	I1202 20:25:54.370889  181375 logs.go:282] 0 containers: []
	W1202 20:25:54.370897  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:54.370906  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:54.370917  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:54.436912  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:54.436933  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:54.436945  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:54.478136  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:54.478171  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:25:54.505170  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:54.505196  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:54.575751  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:54.575787  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:57.090267  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:25:57.100499  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:25:57.100567  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:25:57.125908  181375 cri.go:89] found id: ""
	I1202 20:25:57.125930  181375 logs.go:282] 0 containers: []
	W1202 20:25:57.125939  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:25:57.125947  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:25:57.126005  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:25:57.150856  181375 cri.go:89] found id: ""
	I1202 20:25:57.150878  181375 logs.go:282] 0 containers: []
	W1202 20:25:57.150886  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:25:57.150892  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:25:57.150952  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:25:57.178036  181375 cri.go:89] found id: ""
	I1202 20:25:57.178058  181375 logs.go:282] 0 containers: []
	W1202 20:25:57.178066  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:25:57.178073  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:25:57.178130  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:25:57.203827  181375 cri.go:89] found id: ""
	I1202 20:25:57.203849  181375 logs.go:282] 0 containers: []
	W1202 20:25:57.203857  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:25:57.203863  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:25:57.203918  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:25:57.230094  181375 cri.go:89] found id: ""
	I1202 20:25:57.230116  181375 logs.go:282] 0 containers: []
	W1202 20:25:57.230125  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:25:57.230131  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:25:57.230188  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:25:57.254262  181375 cri.go:89] found id: ""
	I1202 20:25:57.254284  181375 logs.go:282] 0 containers: []
	W1202 20:25:57.254292  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:25:57.254299  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:25:57.254354  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:25:57.280277  181375 cri.go:89] found id: ""
	I1202 20:25:57.280301  181375 logs.go:282] 0 containers: []
	W1202 20:25:57.280311  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:25:57.280317  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:25:57.280374  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:25:57.313278  181375 cri.go:89] found id: ""
	I1202 20:25:57.313303  181375 logs.go:282] 0 containers: []
	W1202 20:25:57.313311  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:25:57.313320  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:25:57.313331  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:25:57.386083  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:25:57.386121  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:25:57.400717  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:25:57.400745  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:25:57.463780  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:25:57.463800  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:25:57.463812  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:25:57.507231  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:25:57.507265  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:00.037999  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:00.111223  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:00.111316  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:00.236880  181375 cri.go:89] found id: ""
	I1202 20:26:00.236908  181375 logs.go:282] 0 containers: []
	W1202 20:26:00.236929  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:00.236937  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:00.237013  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:00.344266  181375 cri.go:89] found id: ""
	I1202 20:26:00.344305  181375 logs.go:282] 0 containers: []
	W1202 20:26:00.344314  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:00.344321  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:00.344392  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:00.379438  181375 cri.go:89] found id: ""
	I1202 20:26:00.379461  181375 logs.go:282] 0 containers: []
	W1202 20:26:00.379469  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:00.379476  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:00.379547  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:00.415490  181375 cri.go:89] found id: ""
	I1202 20:26:00.415512  181375 logs.go:282] 0 containers: []
	W1202 20:26:00.415521  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:00.415528  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:00.415594  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:00.448926  181375 cri.go:89] found id: ""
	I1202 20:26:00.449012  181375 logs.go:282] 0 containers: []
	W1202 20:26:00.449036  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:00.449073  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:00.449175  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:00.478823  181375 cri.go:89] found id: ""
	I1202 20:26:00.478917  181375 logs.go:282] 0 containers: []
	W1202 20:26:00.478940  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:00.478977  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:00.479078  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:00.506522  181375 cri.go:89] found id: ""
	I1202 20:26:00.506546  181375 logs.go:282] 0 containers: []
	W1202 20:26:00.506554  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:00.506560  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:00.506619  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:00.536450  181375 cri.go:89] found id: ""
	I1202 20:26:00.536474  181375 logs.go:282] 0 containers: []
	W1202 20:26:00.536482  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:00.536491  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:00.536504  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:00.578901  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:00.578935  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:00.609178  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:00.609249  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:00.682591  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:00.682630  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:00.697438  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:00.697511  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:00.764528  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:03.266226  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:03.276829  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:03.276897  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:03.311164  181375 cri.go:89] found id: ""
	I1202 20:26:03.311186  181375 logs.go:282] 0 containers: []
	W1202 20:26:03.311194  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:03.311200  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:03.311264  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:03.339300  181375 cri.go:89] found id: ""
	I1202 20:26:03.339322  181375 logs.go:282] 0 containers: []
	W1202 20:26:03.339331  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:03.339339  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:03.339398  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:03.368453  181375 cri.go:89] found id: ""
	I1202 20:26:03.368475  181375 logs.go:282] 0 containers: []
	W1202 20:26:03.368483  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:03.368489  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:03.368547  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:03.394737  181375 cri.go:89] found id: ""
	I1202 20:26:03.394799  181375 logs.go:282] 0 containers: []
	W1202 20:26:03.394822  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:03.394841  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:03.394904  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:03.420228  181375 cri.go:89] found id: ""
	I1202 20:26:03.420253  181375 logs.go:282] 0 containers: []
	W1202 20:26:03.420261  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:03.420268  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:03.420327  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:03.449159  181375 cri.go:89] found id: ""
	I1202 20:26:03.449222  181375 logs.go:282] 0 containers: []
	W1202 20:26:03.449245  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:03.449264  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:03.449354  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:03.475745  181375 cri.go:89] found id: ""
	I1202 20:26:03.475772  181375 logs.go:282] 0 containers: []
	W1202 20:26:03.475781  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:03.475788  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:03.475846  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:03.501981  181375 cri.go:89] found id: ""
	I1202 20:26:03.502008  181375 logs.go:282] 0 containers: []
	W1202 20:26:03.502017  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:03.502026  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:03.502037  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:03.569709  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:03.569778  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:03.569804  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:03.609567  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:03.609604  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:03.642795  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:03.642823  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:03.709738  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:03.709772  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:06.224158  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:06.236882  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:06.236965  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:06.277712  181375 cri.go:89] found id: ""
	I1202 20:26:06.277733  181375 logs.go:282] 0 containers: []
	W1202 20:26:06.277742  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:06.277748  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:06.277810  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:06.329065  181375 cri.go:89] found id: ""
	I1202 20:26:06.329086  181375 logs.go:282] 0 containers: []
	W1202 20:26:06.329094  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:06.329101  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:06.329157  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:06.391797  181375 cri.go:89] found id: ""
	I1202 20:26:06.391820  181375 logs.go:282] 0 containers: []
	W1202 20:26:06.391829  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:06.391836  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:06.391910  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:06.432381  181375 cri.go:89] found id: ""
	I1202 20:26:06.432416  181375 logs.go:282] 0 containers: []
	W1202 20:26:06.432425  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:06.432432  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:06.432496  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:06.467859  181375 cri.go:89] found id: ""
	I1202 20:26:06.467940  181375 logs.go:282] 0 containers: []
	W1202 20:26:06.467962  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:06.467981  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:06.468081  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:06.496636  181375 cri.go:89] found id: ""
	I1202 20:26:06.496670  181375 logs.go:282] 0 containers: []
	W1202 20:26:06.496679  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:06.496686  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:06.496754  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:06.527882  181375 cri.go:89] found id: ""
	I1202 20:26:06.527907  181375 logs.go:282] 0 containers: []
	W1202 20:26:06.527924  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:06.527931  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:06.528001  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:06.563792  181375 cri.go:89] found id: ""
	I1202 20:26:06.563817  181375 logs.go:282] 0 containers: []
	W1202 20:26:06.563833  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:06.563842  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:06.563854  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:06.639712  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:06.639748  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:06.654400  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:06.654429  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:06.736431  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:06.736453  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:06.736475  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:06.782105  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:06.782139  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
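Each gathering pass above walks the same fixed list of control-plane components and finds no containers for any of them. Condensed into one loop, the equivalent check looks roughly like this (a sketch only; the component names mirror the ones queried in this log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done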
	I1202 20:26:09.318771  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:09.329383  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:09.329449  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:09.361200  181375 cri.go:89] found id: ""
	I1202 20:26:09.361222  181375 logs.go:282] 0 containers: []
	W1202 20:26:09.361231  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:09.361237  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:09.361294  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:09.387592  181375 cri.go:89] found id: ""
	I1202 20:26:09.387617  181375 logs.go:282] 0 containers: []
	W1202 20:26:09.387626  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:09.387632  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:09.387693  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:09.416803  181375 cri.go:89] found id: ""
	I1202 20:26:09.416827  181375 logs.go:282] 0 containers: []
	W1202 20:26:09.416836  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:09.416843  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:09.416904  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:09.444341  181375 cri.go:89] found id: ""
	I1202 20:26:09.444366  181375 logs.go:282] 0 containers: []
	W1202 20:26:09.444375  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:09.444383  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:09.444441  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:09.472875  181375 cri.go:89] found id: ""
	I1202 20:26:09.472903  181375 logs.go:282] 0 containers: []
	W1202 20:26:09.472911  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:09.472918  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:09.472975  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:09.499997  181375 cri.go:89] found id: ""
	I1202 20:26:09.500024  181375 logs.go:282] 0 containers: []
	W1202 20:26:09.500032  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:09.500039  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:09.500095  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:09.525343  181375 cri.go:89] found id: ""
	I1202 20:26:09.525369  181375 logs.go:282] 0 containers: []
	W1202 20:26:09.525377  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:09.525384  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:09.525446  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:09.550354  181375 cri.go:89] found id: ""
	I1202 20:26:09.550380  181375 logs.go:282] 0 containers: []
	W1202 20:26:09.550389  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:09.550397  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:09.550408  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:09.578977  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:09.579042  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:09.646337  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:09.646372  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:09.660754  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:09.660783  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:09.725267  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:09.725299  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:09.725315  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:12.265781  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:12.275636  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:12.275704  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:12.312616  181375 cri.go:89] found id: ""
	I1202 20:26:12.312637  181375 logs.go:282] 0 containers: []
	W1202 20:26:12.312645  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:12.312652  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:12.312708  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:12.346779  181375 cri.go:89] found id: ""
	I1202 20:26:12.346799  181375 logs.go:282] 0 containers: []
	W1202 20:26:12.346807  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:12.346813  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:12.346867  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:12.377897  181375 cri.go:89] found id: ""
	I1202 20:26:12.377918  181375 logs.go:282] 0 containers: []
	W1202 20:26:12.377926  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:12.377932  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:12.377994  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:12.403969  181375 cri.go:89] found id: ""
	I1202 20:26:12.403991  181375 logs.go:282] 0 containers: []
	W1202 20:26:12.403999  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:12.404006  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:12.404073  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:12.429717  181375 cri.go:89] found id: ""
	I1202 20:26:12.429738  181375 logs.go:282] 0 containers: []
	W1202 20:26:12.429746  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:12.429756  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:12.429813  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:12.456038  181375 cri.go:89] found id: ""
	I1202 20:26:12.456059  181375 logs.go:282] 0 containers: []
	W1202 20:26:12.456067  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:12.456073  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:12.456132  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:12.480566  181375 cri.go:89] found id: ""
	I1202 20:26:12.480587  181375 logs.go:282] 0 containers: []
	W1202 20:26:12.480595  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:12.480601  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:12.480656  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:12.506457  181375 cri.go:89] found id: ""
	I1202 20:26:12.506477  181375 logs.go:282] 0 containers: []
	W1202 20:26:12.506485  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:12.506494  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:12.506504  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:12.577393  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:12.577429  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:12.591835  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:12.591862  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:12.656767  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:12.656826  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:12.656851  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:12.696011  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:12.696045  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:15.225572  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:15.235456  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:15.235528  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:15.264283  181375 cri.go:89] found id: ""
	I1202 20:26:15.264305  181375 logs.go:282] 0 containers: []
	W1202 20:26:15.264313  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:15.264320  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:15.264379  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:15.290284  181375 cri.go:89] found id: ""
	I1202 20:26:15.290346  181375 logs.go:282] 0 containers: []
	W1202 20:26:15.290369  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:15.290387  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:15.290457  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:15.327435  181375 cri.go:89] found id: ""
	I1202 20:26:15.327506  181375 logs.go:282] 0 containers: []
	W1202 20:26:15.327536  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:15.327556  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:15.327640  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:15.366977  181375 cri.go:89] found id: ""
	I1202 20:26:15.367039  181375 logs.go:282] 0 containers: []
	W1202 20:26:15.367063  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:15.367081  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:15.367146  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:15.392498  181375 cri.go:89] found id: ""
	I1202 20:26:15.392529  181375 logs.go:282] 0 containers: []
	W1202 20:26:15.392538  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:15.392544  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:15.392610  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:15.418404  181375 cri.go:89] found id: ""
	I1202 20:26:15.418428  181375 logs.go:282] 0 containers: []
	W1202 20:26:15.418437  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:15.418444  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:15.418500  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:15.444706  181375 cri.go:89] found id: ""
	I1202 20:26:15.444746  181375 logs.go:282] 0 containers: []
	W1202 20:26:15.444755  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:15.444761  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:15.444833  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:15.470666  181375 cri.go:89] found id: ""
	I1202 20:26:15.470694  181375 logs.go:282] 0 containers: []
	W1202 20:26:15.470705  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:15.470714  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:15.470726  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:15.485195  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:15.485223  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:15.549837  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:15.549858  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:15.549871  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:15.590990  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:15.591023  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:15.620615  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:15.620642  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:18.191717  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:18.202696  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:18.202769  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:18.231886  181375 cri.go:89] found id: ""
	I1202 20:26:18.231910  181375 logs.go:282] 0 containers: []
	W1202 20:26:18.231919  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:18.231925  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:18.231993  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:18.259809  181375 cri.go:89] found id: ""
	I1202 20:26:18.259833  181375 logs.go:282] 0 containers: []
	W1202 20:26:18.259842  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:18.259849  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:18.259906  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:18.331855  181375 cri.go:89] found id: ""
	I1202 20:26:18.331877  181375 logs.go:282] 0 containers: []
	W1202 20:26:18.331886  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:18.331892  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:18.331948  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:18.393179  181375 cri.go:89] found id: ""
	I1202 20:26:18.393200  181375 logs.go:282] 0 containers: []
	W1202 20:26:18.393209  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:18.393215  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:18.393271  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:18.421350  181375 cri.go:89] found id: ""
	I1202 20:26:18.421420  181375 logs.go:282] 0 containers: []
	W1202 20:26:18.421442  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:18.421461  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:18.421541  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:18.459963  181375 cri.go:89] found id: ""
	I1202 20:26:18.459983  181375 logs.go:282] 0 containers: []
	W1202 20:26:18.459991  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:18.459998  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:18.460056  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:18.487124  181375 cri.go:89] found id: ""
	I1202 20:26:18.487143  181375 logs.go:282] 0 containers: []
	W1202 20:26:18.487151  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:18.487158  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:18.487214  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:18.514979  181375 cri.go:89] found id: ""
	I1202 20:26:18.515000  181375 logs.go:282] 0 containers: []
	W1202 20:26:18.515008  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:18.515017  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:18.515028  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:18.591707  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:18.591783  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:18.608382  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:18.608515  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:18.697014  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:18.697082  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:18.697110  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:18.742828  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:18.742899  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:21.280850  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:21.291765  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:21.291829  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:21.323178  181375 cri.go:89] found id: ""
	I1202 20:26:21.323199  181375 logs.go:282] 0 containers: []
	W1202 20:26:21.323207  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:21.323214  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:21.323273  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:21.360279  181375 cri.go:89] found id: ""
	I1202 20:26:21.360299  181375 logs.go:282] 0 containers: []
	W1202 20:26:21.360308  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:21.360314  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:21.360372  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:21.385858  181375 cri.go:89] found id: ""
	I1202 20:26:21.385892  181375 logs.go:282] 0 containers: []
	W1202 20:26:21.385901  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:21.385908  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:21.385971  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:21.411971  181375 cri.go:89] found id: ""
	I1202 20:26:21.412031  181375 logs.go:282] 0 containers: []
	W1202 20:26:21.412054  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:21.412072  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:21.412154  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:21.436764  181375 cri.go:89] found id: ""
	I1202 20:26:21.436798  181375 logs.go:282] 0 containers: []
	W1202 20:26:21.436807  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:21.436813  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:21.436880  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:21.462196  181375 cri.go:89] found id: ""
	I1202 20:26:21.462221  181375 logs.go:282] 0 containers: []
	W1202 20:26:21.462230  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:21.462236  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:21.462299  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:21.488493  181375 cri.go:89] found id: ""
	I1202 20:26:21.488514  181375 logs.go:282] 0 containers: []
	W1202 20:26:21.488522  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:21.488528  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:21.488589  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:21.514461  181375 cri.go:89] found id: ""
	I1202 20:26:21.514531  181375 logs.go:282] 0 containers: []
	W1202 20:26:21.514554  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:21.514576  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:21.514616  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:21.583519  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:21.583553  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:21.597906  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:21.597931  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:21.663591  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:21.663610  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:21.663623  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:21.703381  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:21.703417  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:24.234627  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:24.244855  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:24.244930  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:24.268916  181375 cri.go:89] found id: ""
	I1202 20:26:24.268987  181375 logs.go:282] 0 containers: []
	W1202 20:26:24.269011  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:24.269030  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:24.269105  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:24.304234  181375 cri.go:89] found id: ""
	I1202 20:26:24.304296  181375 logs.go:282] 0 containers: []
	W1202 20:26:24.304321  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:24.304340  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:24.304411  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:24.347027  181375 cri.go:89] found id: ""
	I1202 20:26:24.347053  181375 logs.go:282] 0 containers: []
	W1202 20:26:24.347062  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:24.347101  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:24.347180  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:24.374552  181375 cri.go:89] found id: ""
	I1202 20:26:24.374572  181375 logs.go:282] 0 containers: []
	W1202 20:26:24.374581  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:24.374587  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:24.374651  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:24.399666  181375 cri.go:89] found id: ""
	I1202 20:26:24.399685  181375 logs.go:282] 0 containers: []
	W1202 20:26:24.399694  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:24.399700  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:24.399757  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:24.424175  181375 cri.go:89] found id: ""
	I1202 20:26:24.424196  181375 logs.go:282] 0 containers: []
	W1202 20:26:24.424205  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:24.424211  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:24.424266  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:24.453207  181375 cri.go:89] found id: ""
	I1202 20:26:24.453227  181375 logs.go:282] 0 containers: []
	W1202 20:26:24.453235  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:24.453242  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:24.453298  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:24.480400  181375 cri.go:89] found id: ""
	I1202 20:26:24.480420  181375 logs.go:282] 0 containers: []
	W1202 20:26:24.480428  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:24.480437  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:24.480449  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:24.520452  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:24.520488  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:24.554873  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:24.554901  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:24.625158  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:24.625195  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:24.640064  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:24.640094  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:24.707066  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:27.207726  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:27.217753  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:27.217837  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:27.242917  181375 cri.go:89] found id: ""
	I1202 20:26:27.242989  181375 logs.go:282] 0 containers: []
	W1202 20:26:27.243004  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:27.243012  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:27.243068  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:27.268813  181375 cri.go:89] found id: ""
	I1202 20:26:27.268835  181375 logs.go:282] 0 containers: []
	W1202 20:26:27.268844  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:27.268851  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:27.268914  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:27.296800  181375 cri.go:89] found id: ""
	I1202 20:26:27.296824  181375 logs.go:282] 0 containers: []
	W1202 20:26:27.296833  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:27.296839  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:27.296896  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:27.330457  181375 cri.go:89] found id: ""
	I1202 20:26:27.330485  181375 logs.go:282] 0 containers: []
	W1202 20:26:27.330496  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:27.330503  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:27.330571  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:27.362464  181375 cri.go:89] found id: ""
	I1202 20:26:27.362539  181375 logs.go:282] 0 containers: []
	W1202 20:26:27.362565  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:27.362584  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:27.362682  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:27.389669  181375 cri.go:89] found id: ""
	I1202 20:26:27.389693  181375 logs.go:282] 0 containers: []
	W1202 20:26:27.389702  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:27.389708  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:27.389764  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:27.416720  181375 cri.go:89] found id: ""
	I1202 20:26:27.416747  181375 logs.go:282] 0 containers: []
	W1202 20:26:27.416756  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:27.416762  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:27.416830  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:27.442915  181375 cri.go:89] found id: ""
	I1202 20:26:27.442940  181375 logs.go:282] 0 containers: []
	W1202 20:26:27.442949  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:27.442957  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:27.442969  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:27.485923  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:27.485969  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:27.515160  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:27.515185  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:27.586346  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:27.586385  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:27.600859  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:27.600892  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:27.669811  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:30.170869  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:30.181518  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:30.181590  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:30.210180  181375 cri.go:89] found id: ""
	I1202 20:26:30.210208  181375 logs.go:282] 0 containers: []
	W1202 20:26:30.210218  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:30.210226  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:30.210291  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:30.237192  181375 cri.go:89] found id: ""
	I1202 20:26:30.237217  181375 logs.go:282] 0 containers: []
	W1202 20:26:30.237225  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:30.237232  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:30.237297  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:30.263013  181375 cri.go:89] found id: ""
	I1202 20:26:30.263038  181375 logs.go:282] 0 containers: []
	W1202 20:26:30.263047  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:30.263054  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:30.263115  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:30.299001  181375 cri.go:89] found id: ""
	I1202 20:26:30.299024  181375 logs.go:282] 0 containers: []
	W1202 20:26:30.299042  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:30.299049  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:30.299107  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:30.330623  181375 cri.go:89] found id: ""
	I1202 20:26:30.330650  181375 logs.go:282] 0 containers: []
	W1202 20:26:30.330659  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:30.330666  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:30.330727  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:30.366899  181375 cri.go:89] found id: ""
	I1202 20:26:30.366924  181375 logs.go:282] 0 containers: []
	W1202 20:26:30.366938  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:30.366945  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:30.367004  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:30.396572  181375 cri.go:89] found id: ""
	I1202 20:26:30.396599  181375 logs.go:282] 0 containers: []
	W1202 20:26:30.396608  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:30.396615  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:30.396675  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:30.421580  181375 cri.go:89] found id: ""
	I1202 20:26:30.421605  181375 logs.go:282] 0 containers: []
	W1202 20:26:30.421614  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:30.421622  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:30.421634  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:30.489461  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:30.489497  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:30.504568  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:30.504641  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:30.570405  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:30.570430  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:30.570442  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:30.613527  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:30.613569  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:33.146092  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:33.157674  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:33.157747  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:33.191898  181375 cri.go:89] found id: ""
	I1202 20:26:33.191918  181375 logs.go:282] 0 containers: []
	W1202 20:26:33.191926  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:33.191933  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:33.191992  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:33.218250  181375 cri.go:89] found id: ""
	I1202 20:26:33.218317  181375 logs.go:282] 0 containers: []
	W1202 20:26:33.218330  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:33.218337  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:33.218397  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:33.247844  181375 cri.go:89] found id: ""
	I1202 20:26:33.247865  181375 logs.go:282] 0 containers: []
	W1202 20:26:33.247874  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:33.247887  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:33.247944  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:33.274019  181375 cri.go:89] found id: ""
	I1202 20:26:33.274042  181375 logs.go:282] 0 containers: []
	W1202 20:26:33.274051  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:33.274063  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:33.274124  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:33.315869  181375 cri.go:89] found id: ""
	I1202 20:26:33.315894  181375 logs.go:282] 0 containers: []
	W1202 20:26:33.315903  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:33.315909  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:33.315970  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:33.345628  181375 cri.go:89] found id: ""
	I1202 20:26:33.345677  181375 logs.go:282] 0 containers: []
	W1202 20:26:33.345687  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:33.345693  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:33.345754  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:33.374615  181375 cri.go:89] found id: ""
	I1202 20:26:33.374639  181375 logs.go:282] 0 containers: []
	W1202 20:26:33.374648  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:33.374654  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:33.374711  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:33.405291  181375 cri.go:89] found id: ""
	I1202 20:26:33.405315  181375 logs.go:282] 0 containers: []
	W1202 20:26:33.405323  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:33.405332  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:33.405344  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:33.474896  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:33.474934  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:33.490142  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:33.490169  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:33.557919  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:33.557941  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:33.557955  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:33.598662  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:33.598693  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:36.126642  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:36.136789  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:36.136866  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:36.177397  181375 cri.go:89] found id: ""
	I1202 20:26:36.177421  181375 logs.go:282] 0 containers: []
	W1202 20:26:36.177430  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:36.177437  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:36.177494  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:36.205187  181375 cri.go:89] found id: ""
	I1202 20:26:36.205212  181375 logs.go:282] 0 containers: []
	W1202 20:26:36.205221  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:36.205228  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:36.205290  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:36.232184  181375 cri.go:89] found id: ""
	I1202 20:26:36.232211  181375 logs.go:282] 0 containers: []
	W1202 20:26:36.232220  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:36.232227  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:36.232287  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:36.259623  181375 cri.go:89] found id: ""
	I1202 20:26:36.259646  181375 logs.go:282] 0 containers: []
	W1202 20:26:36.259655  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:36.259665  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:36.259724  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:36.284819  181375 cri.go:89] found id: ""
	I1202 20:26:36.284843  181375 logs.go:282] 0 containers: []
	W1202 20:26:36.284851  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:36.284858  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:36.284920  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:36.313112  181375 cri.go:89] found id: ""
	I1202 20:26:36.313134  181375 logs.go:282] 0 containers: []
	W1202 20:26:36.313142  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:36.313148  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:36.313208  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:36.338070  181375 cri.go:89] found id: ""
	I1202 20:26:36.338100  181375 logs.go:282] 0 containers: []
	W1202 20:26:36.338109  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:36.338117  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:36.338178  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:36.365156  181375 cri.go:89] found id: ""
	I1202 20:26:36.365182  181375 logs.go:282] 0 containers: []
	W1202 20:26:36.365190  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:36.365199  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:36.365211  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:36.432340  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:36.432377  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:36.446758  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:36.446786  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:36.514389  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:36.514410  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:36.514425  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:36.558725  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:36.558760  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:39.089687  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:39.101180  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:39.101251  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:39.150772  181375 cri.go:89] found id: ""
	I1202 20:26:39.150798  181375 logs.go:282] 0 containers: []
	W1202 20:26:39.150808  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:39.150814  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:39.150871  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:39.195777  181375 cri.go:89] found id: ""
	I1202 20:26:39.195802  181375 logs.go:282] 0 containers: []
	W1202 20:26:39.195810  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:39.195820  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:39.195881  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:39.240558  181375 cri.go:89] found id: ""
	I1202 20:26:39.240591  181375 logs.go:282] 0 containers: []
	W1202 20:26:39.240600  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:39.240607  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:39.240667  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:39.283707  181375 cri.go:89] found id: ""
	I1202 20:26:39.283729  181375 logs.go:282] 0 containers: []
	W1202 20:26:39.283739  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:39.283745  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:39.283799  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:39.369096  181375 cri.go:89] found id: ""
	I1202 20:26:39.369119  181375 logs.go:282] 0 containers: []
	W1202 20:26:39.369127  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:39.369134  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:39.369187  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:39.427100  181375 cri.go:89] found id: ""
	I1202 20:26:39.427126  181375 logs.go:282] 0 containers: []
	W1202 20:26:39.427136  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:39.427143  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:39.427199  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:39.470980  181375 cri.go:89] found id: ""
	I1202 20:26:39.471005  181375 logs.go:282] 0 containers: []
	W1202 20:26:39.471013  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:39.471019  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:39.471087  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:39.536042  181375 cri.go:89] found id: ""
	I1202 20:26:39.536063  181375 logs.go:282] 0 containers: []
	W1202 20:26:39.536071  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:39.536081  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:39.536093  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:39.620567  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:39.620604  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:39.639218  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:39.639248  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:39.770985  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:39.771008  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:39.771026  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:39.830278  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:39.830321  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:42.382407  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:42.394233  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:42.394374  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:42.423788  181375 cri.go:89] found id: ""
	I1202 20:26:42.423813  181375 logs.go:282] 0 containers: []
	W1202 20:26:42.423822  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:42.423830  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:42.423891  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:42.454267  181375 cri.go:89] found id: ""
	I1202 20:26:42.454289  181375 logs.go:282] 0 containers: []
	W1202 20:26:42.454298  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:42.454304  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:42.454360  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:42.484540  181375 cri.go:89] found id: ""
	I1202 20:26:42.484565  181375 logs.go:282] 0 containers: []
	W1202 20:26:42.484573  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:42.484579  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:42.484638  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:42.510839  181375 cri.go:89] found id: ""
	I1202 20:26:42.510861  181375 logs.go:282] 0 containers: []
	W1202 20:26:42.510869  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:42.510875  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:42.510930  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:42.536639  181375 cri.go:89] found id: ""
	I1202 20:26:42.536663  181375 logs.go:282] 0 containers: []
	W1202 20:26:42.536672  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:42.536679  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:42.536740  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:42.562633  181375 cri.go:89] found id: ""
	I1202 20:26:42.562655  181375 logs.go:282] 0 containers: []
	W1202 20:26:42.562663  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:42.562670  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:42.562726  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:42.586861  181375 cri.go:89] found id: ""
	I1202 20:26:42.586885  181375 logs.go:282] 0 containers: []
	W1202 20:26:42.586899  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:42.586905  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:42.586964  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:42.611517  181375 cri.go:89] found id: ""
	I1202 20:26:42.611539  181375 logs.go:282] 0 containers: []
	W1202 20:26:42.611547  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:42.611555  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:42.611566  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:42.681221  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:42.681251  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:42.695243  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:42.695272  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:42.761437  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:42.761503  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:42.761530  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:42.801759  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:42.801793  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:45.334667  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:45.352286  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:45.352360  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:45.386666  181375 cri.go:89] found id: ""
	I1202 20:26:45.386693  181375 logs.go:282] 0 containers: []
	W1202 20:26:45.386702  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:45.386708  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:45.386771  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:45.413778  181375 cri.go:89] found id: ""
	I1202 20:26:45.413801  181375 logs.go:282] 0 containers: []
	W1202 20:26:45.413809  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:45.413815  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:45.413871  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:45.437897  181375 cri.go:89] found id: ""
	I1202 20:26:45.437921  181375 logs.go:282] 0 containers: []
	W1202 20:26:45.437930  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:45.437936  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:45.437994  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:45.464342  181375 cri.go:89] found id: ""
	I1202 20:26:45.464364  181375 logs.go:282] 0 containers: []
	W1202 20:26:45.464372  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:45.464378  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:45.464436  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:45.494689  181375 cri.go:89] found id: ""
	I1202 20:26:45.494711  181375 logs.go:282] 0 containers: []
	W1202 20:26:45.494719  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:45.494725  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:45.494782  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:45.519655  181375 cri.go:89] found id: ""
	I1202 20:26:45.519678  181375 logs.go:282] 0 containers: []
	W1202 20:26:45.519687  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:45.519693  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:45.519755  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:45.550164  181375 cri.go:89] found id: ""
	I1202 20:26:45.550187  181375 logs.go:282] 0 containers: []
	W1202 20:26:45.550196  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:45.550203  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:45.550288  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:45.578710  181375 cri.go:89] found id: ""
	I1202 20:26:45.578732  181375 logs.go:282] 0 containers: []
	W1202 20:26:45.578743  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:45.578752  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:45.579147  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:45.643775  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:45.643799  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:45.643812  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:45.683937  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:45.683971  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:45.712922  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:45.712956  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:45.786650  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:45.786685  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:48.302187  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:48.313056  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:48.313124  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:48.343822  181375 cri.go:89] found id: ""
	I1202 20:26:48.343843  181375 logs.go:282] 0 containers: []
	W1202 20:26:48.343851  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:48.343858  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:48.343915  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:48.372425  181375 cri.go:89] found id: ""
	I1202 20:26:48.372448  181375 logs.go:282] 0 containers: []
	W1202 20:26:48.372457  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:48.372469  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:48.372529  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:48.399577  181375 cri.go:89] found id: ""
	I1202 20:26:48.399606  181375 logs.go:282] 0 containers: []
	W1202 20:26:48.399615  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:48.399621  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:48.399684  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:48.425785  181375 cri.go:89] found id: ""
	I1202 20:26:48.425806  181375 logs.go:282] 0 containers: []
	W1202 20:26:48.425814  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:48.425820  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:48.425876  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:48.451863  181375 cri.go:89] found id: ""
	I1202 20:26:48.451889  181375 logs.go:282] 0 containers: []
	W1202 20:26:48.451898  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:48.451904  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:48.451961  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:48.477308  181375 cri.go:89] found id: ""
	I1202 20:26:48.477329  181375 logs.go:282] 0 containers: []
	W1202 20:26:48.477338  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:48.477344  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:48.477399  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:48.502856  181375 cri.go:89] found id: ""
	I1202 20:26:48.502877  181375 logs.go:282] 0 containers: []
	W1202 20:26:48.502886  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:48.502892  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:48.502948  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:48.532846  181375 cri.go:89] found id: ""
	I1202 20:26:48.532867  181375 logs.go:282] 0 containers: []
	W1202 20:26:48.532875  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:48.532886  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:48.532898  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:48.600031  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:48.600069  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:48.615443  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:48.615474  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:48.682914  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:48.682931  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:48.682943  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:48.727772  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:48.727813  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:51.258356  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:51.268406  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:51.268473  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:51.298630  181375 cri.go:89] found id: ""
	I1202 20:26:51.298651  181375 logs.go:282] 0 containers: []
	W1202 20:26:51.298660  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:51.298667  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:51.298727  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:51.334030  181375 cri.go:89] found id: ""
	I1202 20:26:51.334055  181375 logs.go:282] 0 containers: []
	W1202 20:26:51.334064  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:51.334071  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:51.334129  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:51.362771  181375 cri.go:89] found id: ""
	I1202 20:26:51.362796  181375 logs.go:282] 0 containers: []
	W1202 20:26:51.362805  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:51.362812  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:51.362869  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:51.388781  181375 cri.go:89] found id: ""
	I1202 20:26:51.388807  181375 logs.go:282] 0 containers: []
	W1202 20:26:51.388817  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:51.388823  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:51.388886  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:51.413806  181375 cri.go:89] found id: ""
	I1202 20:26:51.413828  181375 logs.go:282] 0 containers: []
	W1202 20:26:51.413836  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:51.413843  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:51.413901  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:51.442039  181375 cri.go:89] found id: ""
	I1202 20:26:51.442065  181375 logs.go:282] 0 containers: []
	W1202 20:26:51.442073  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:51.442080  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:51.442144  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:51.467240  181375 cri.go:89] found id: ""
	I1202 20:26:51.467264  181375 logs.go:282] 0 containers: []
	W1202 20:26:51.467273  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:51.467279  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:51.467335  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:51.492740  181375 cri.go:89] found id: ""
	I1202 20:26:51.492766  181375 logs.go:282] 0 containers: []
	W1202 20:26:51.492775  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:51.492784  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:51.492797  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:51.507065  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:51.507093  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:51.573725  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:51.573750  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:51.573766  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:51.613308  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:51.613340  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:51.640305  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:51.640333  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:54.211673  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:54.221984  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:54.222055  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:54.248430  181375 cri.go:89] found id: ""
	I1202 20:26:54.248453  181375 logs.go:282] 0 containers: []
	W1202 20:26:54.248462  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:54.248468  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:54.248524  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:54.274436  181375 cri.go:89] found id: ""
	I1202 20:26:54.274461  181375 logs.go:282] 0 containers: []
	W1202 20:26:54.274469  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:54.274476  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:54.274533  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:54.312103  181375 cri.go:89] found id: ""
	I1202 20:26:54.312128  181375 logs.go:282] 0 containers: []
	W1202 20:26:54.312136  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:54.312143  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:54.312200  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:54.350878  181375 cri.go:89] found id: ""
	I1202 20:26:54.350900  181375 logs.go:282] 0 containers: []
	W1202 20:26:54.350909  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:54.350915  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:54.350976  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:54.378549  181375 cri.go:89] found id: ""
	I1202 20:26:54.378578  181375 logs.go:282] 0 containers: []
	W1202 20:26:54.378586  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:54.378592  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:54.378691  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:54.404869  181375 cri.go:89] found id: ""
	I1202 20:26:54.404893  181375 logs.go:282] 0 containers: []
	W1202 20:26:54.404901  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:54.404914  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:54.404973  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:54.430414  181375 cri.go:89] found id: ""
	I1202 20:26:54.430438  181375 logs.go:282] 0 containers: []
	W1202 20:26:54.430446  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:54.430453  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:54.430508  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:54.459264  181375 cri.go:89] found id: ""
	I1202 20:26:54.459288  181375 logs.go:282] 0 containers: []
	W1202 20:26:54.459298  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:54.459307  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:54.459319  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:26:54.487081  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:54.487106  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:54.557101  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:54.557135  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:54.571515  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:54.571543  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:54.638529  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:54.638551  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:54.638564  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:57.180147  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:26:57.190654  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:26:57.190734  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:26:57.215503  181375 cri.go:89] found id: ""
	I1202 20:26:57.215529  181375 logs.go:282] 0 containers: []
	W1202 20:26:57.215538  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:26:57.215545  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:26:57.215605  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:26:57.240722  181375 cri.go:89] found id: ""
	I1202 20:26:57.240747  181375 logs.go:282] 0 containers: []
	W1202 20:26:57.240756  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:26:57.240762  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:26:57.240819  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:26:57.265495  181375 cri.go:89] found id: ""
	I1202 20:26:57.265519  181375 logs.go:282] 0 containers: []
	W1202 20:26:57.265528  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:26:57.265535  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:26:57.265590  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:26:57.308436  181375 cri.go:89] found id: ""
	I1202 20:26:57.308461  181375 logs.go:282] 0 containers: []
	W1202 20:26:57.308475  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:26:57.308485  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:26:57.308551  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:26:57.342165  181375 cri.go:89] found id: ""
	I1202 20:26:57.342191  181375 logs.go:282] 0 containers: []
	W1202 20:26:57.342199  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:26:57.342205  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:26:57.342288  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:26:57.374558  181375 cri.go:89] found id: ""
	I1202 20:26:57.374583  181375 logs.go:282] 0 containers: []
	W1202 20:26:57.374592  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:26:57.374599  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:26:57.374655  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:26:57.400123  181375 cri.go:89] found id: ""
	I1202 20:26:57.400146  181375 logs.go:282] 0 containers: []
	W1202 20:26:57.400155  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:26:57.400161  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:26:57.400219  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:26:57.425559  181375 cri.go:89] found id: ""
	I1202 20:26:57.425580  181375 logs.go:282] 0 containers: []
	W1202 20:26:57.425588  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:26:57.425596  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:26:57.425607  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:26:57.493682  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:26:57.493721  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:26:57.508551  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:26:57.508579  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:26:57.583533  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:26:57.583554  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:26:57.583567  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:26:57.627494  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:26:57.627538  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:00.162471  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:00.259449  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:00.259525  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:00.345447  181375 cri.go:89] found id: ""
	I1202 20:27:00.345472  181375 logs.go:282] 0 containers: []
	W1202 20:27:00.345482  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:00.345489  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:00.345558  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:00.427438  181375 cri.go:89] found id: ""
	I1202 20:27:00.427468  181375 logs.go:282] 0 containers: []
	W1202 20:27:00.427477  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:00.427483  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:00.427554  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:00.465238  181375 cri.go:89] found id: ""
	I1202 20:27:00.465267  181375 logs.go:282] 0 containers: []
	W1202 20:27:00.465277  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:00.465284  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:00.465347  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:00.492999  181375 cri.go:89] found id: ""
	I1202 20:27:00.493025  181375 logs.go:282] 0 containers: []
	W1202 20:27:00.493034  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:00.493040  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:00.493101  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:00.524796  181375 cri.go:89] found id: ""
	I1202 20:27:00.524822  181375 logs.go:282] 0 containers: []
	W1202 20:27:00.524831  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:00.524838  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:00.524898  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:00.552643  181375 cri.go:89] found id: ""
	I1202 20:27:00.552665  181375 logs.go:282] 0 containers: []
	W1202 20:27:00.552674  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:00.552681  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:00.552737  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:00.577472  181375 cri.go:89] found id: ""
	I1202 20:27:00.577498  181375 logs.go:282] 0 containers: []
	W1202 20:27:00.577506  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:00.577513  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:00.577570  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:00.603627  181375 cri.go:89] found id: ""
	I1202 20:27:00.603649  181375 logs.go:282] 0 containers: []
	W1202 20:27:00.603658  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:00.603666  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:00.603677  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:00.672066  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:00.672083  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:00.672095  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:00.712275  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:00.712308  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:00.740849  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:00.740884  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:00.813890  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:00.813927  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
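When no control-plane containers are found, the tool collects node-level logs instead. The same data can be pulled by hand with the commands it shells out to; a minimal sketch (assuming kubelet and CRI-O run as the systemd units named in this log) is:

    #!/bin/bash
    # Manually gather the same diagnostics minikube collects above.
    sudo journalctl -u kubelet -n 400        # kubelet unit logs
    sudo journalctl -u crio -n 400           # CRI-O unit logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
    sudo crictl ps -a || sudo docker ps -a   # container status, falling back to docker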
	I1202 20:27:03.329772  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:03.343920  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:03.343991  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:03.408747  181375 cri.go:89] found id: ""
	I1202 20:27:03.408772  181375 logs.go:282] 0 containers: []
	W1202 20:27:03.408780  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:03.408787  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:03.408851  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:03.452502  181375 cri.go:89] found id: ""
	I1202 20:27:03.452528  181375 logs.go:282] 0 containers: []
	W1202 20:27:03.452541  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:03.452547  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:03.452609  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:03.487973  181375 cri.go:89] found id: ""
	I1202 20:27:03.488002  181375 logs.go:282] 0 containers: []
	W1202 20:27:03.488011  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:03.488018  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:03.488086  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:03.520705  181375 cri.go:89] found id: ""
	I1202 20:27:03.520734  181375 logs.go:282] 0 containers: []
	W1202 20:27:03.520743  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:03.520753  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:03.520817  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:03.552045  181375 cri.go:89] found id: ""
	I1202 20:27:03.552069  181375 logs.go:282] 0 containers: []
	W1202 20:27:03.552078  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:03.552084  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:03.552139  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:03.587091  181375 cri.go:89] found id: ""
	I1202 20:27:03.587113  181375 logs.go:282] 0 containers: []
	W1202 20:27:03.587122  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:03.587128  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:03.587193  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:03.613869  181375 cri.go:89] found id: ""
	I1202 20:27:03.613894  181375 logs.go:282] 0 containers: []
	W1202 20:27:03.613902  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:03.613909  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:03.613967  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:03.643570  181375 cri.go:89] found id: ""
	I1202 20:27:03.643595  181375 logs.go:282] 0 containers: []
	W1202 20:27:03.643605  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:03.643613  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:03.643626  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:03.658005  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:03.658032  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:03.728882  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:03.728900  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:03.728914  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:03.769770  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:03.769802  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:03.798668  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:03.798693  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:06.371698  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:06.383338  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:06.383420  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:06.426609  181375 cri.go:89] found id: ""
	I1202 20:27:06.426636  181375 logs.go:282] 0 containers: []
	W1202 20:27:06.426645  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:06.426652  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:06.426706  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:06.457239  181375 cri.go:89] found id: ""
	I1202 20:27:06.457266  181375 logs.go:282] 0 containers: []
	W1202 20:27:06.457274  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:06.457280  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:06.457334  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:06.483811  181375 cri.go:89] found id: ""
	I1202 20:27:06.483832  181375 logs.go:282] 0 containers: []
	W1202 20:27:06.483840  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:06.483846  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:06.483907  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:06.519197  181375 cri.go:89] found id: ""
	I1202 20:27:06.519218  181375 logs.go:282] 0 containers: []
	W1202 20:27:06.519226  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:06.519232  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:06.519294  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:06.559713  181375 cri.go:89] found id: ""
	I1202 20:27:06.559736  181375 logs.go:282] 0 containers: []
	W1202 20:27:06.559744  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:06.559750  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:06.559810  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:06.585000  181375 cri.go:89] found id: ""
	I1202 20:27:06.585020  181375 logs.go:282] 0 containers: []
	W1202 20:27:06.585028  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:06.585035  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:06.585094  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:06.618936  181375 cri.go:89] found id: ""
	I1202 20:27:06.618956  181375 logs.go:282] 0 containers: []
	W1202 20:27:06.618964  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:06.618971  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:06.619029  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:06.658165  181375 cri.go:89] found id: ""
	I1202 20:27:06.658185  181375 logs.go:282] 0 containers: []
	W1202 20:27:06.658193  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:06.658202  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:06.658213  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:06.708171  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:06.708198  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:06.798468  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:06.798499  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:06.816578  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:06.816737  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:06.916907  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:06.916924  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:06.916938  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:09.467933  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:09.478121  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:09.478187  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:09.505584  181375 cri.go:89] found id: ""
	I1202 20:27:09.505606  181375 logs.go:282] 0 containers: []
	W1202 20:27:09.505615  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:09.505621  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:09.505697  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:09.536409  181375 cri.go:89] found id: ""
	I1202 20:27:09.536432  181375 logs.go:282] 0 containers: []
	W1202 20:27:09.536441  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:09.536447  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:09.536504  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:09.576343  181375 cri.go:89] found id: ""
	I1202 20:27:09.576365  181375 logs.go:282] 0 containers: []
	W1202 20:27:09.576373  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:09.576379  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:09.576434  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:09.605816  181375 cri.go:89] found id: ""
	I1202 20:27:09.605838  181375 logs.go:282] 0 containers: []
	W1202 20:27:09.605846  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:09.605856  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:09.605922  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:09.638731  181375 cri.go:89] found id: ""
	I1202 20:27:09.638754  181375 logs.go:282] 0 containers: []
	W1202 20:27:09.638763  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:09.638770  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:09.638829  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:09.675947  181375 cri.go:89] found id: ""
	I1202 20:27:09.675969  181375 logs.go:282] 0 containers: []
	W1202 20:27:09.675977  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:09.675983  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:09.676039  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:09.708883  181375 cri.go:89] found id: ""
	I1202 20:27:09.708904  181375 logs.go:282] 0 containers: []
	W1202 20:27:09.708912  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:09.708919  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:09.708977  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:09.747356  181375 cri.go:89] found id: ""
	I1202 20:27:09.747376  181375 logs.go:282] 0 containers: []
	W1202 20:27:09.747385  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:09.747393  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:09.747405  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:09.828576  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:09.828654  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:09.845052  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:09.845130  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:09.923833  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:09.923895  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:09.923921  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:09.968669  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:09.968781  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:12.524329  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:12.534468  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:12.534538  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:12.563453  181375 cri.go:89] found id: ""
	I1202 20:27:12.563475  181375 logs.go:282] 0 containers: []
	W1202 20:27:12.563483  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:12.563489  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:12.563551  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:12.591965  181375 cri.go:89] found id: ""
	I1202 20:27:12.591990  181375 logs.go:282] 0 containers: []
	W1202 20:27:12.591999  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:12.592006  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:12.592062  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:12.618224  181375 cri.go:89] found id: ""
	I1202 20:27:12.618259  181375 logs.go:282] 0 containers: []
	W1202 20:27:12.618268  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:12.618274  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:12.618331  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:12.644102  181375 cri.go:89] found id: ""
	I1202 20:27:12.644124  181375 logs.go:282] 0 containers: []
	W1202 20:27:12.644132  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:12.644138  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:12.644193  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:12.668279  181375 cri.go:89] found id: ""
	I1202 20:27:12.668300  181375 logs.go:282] 0 containers: []
	W1202 20:27:12.668309  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:12.668315  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:12.668578  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:12.700046  181375 cri.go:89] found id: ""
	I1202 20:27:12.700080  181375 logs.go:282] 0 containers: []
	W1202 20:27:12.700091  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:12.700099  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:12.700173  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:12.732985  181375 cri.go:89] found id: ""
	I1202 20:27:12.733022  181375 logs.go:282] 0 containers: []
	W1202 20:27:12.733030  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:12.733036  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:12.733107  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:12.757752  181375 cri.go:89] found id: ""
	I1202 20:27:12.757825  181375 logs.go:282] 0 containers: []
	W1202 20:27:12.757840  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:12.757850  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:12.757864  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:12.823510  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:12.823545  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:12.837806  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:12.837833  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:12.902652  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:12.902712  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:12.902739  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:12.943463  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:12.943494  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:15.473477  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:15.484051  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:15.484117  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:15.510291  181375 cri.go:89] found id: ""
	I1202 20:27:15.510357  181375 logs.go:282] 0 containers: []
	W1202 20:27:15.510380  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:15.510399  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:15.510466  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:15.536643  181375 cri.go:89] found id: ""
	I1202 20:27:15.536667  181375 logs.go:282] 0 containers: []
	W1202 20:27:15.536677  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:15.536683  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:15.536744  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:15.565369  181375 cri.go:89] found id: ""
	I1202 20:27:15.565395  181375 logs.go:282] 0 containers: []
	W1202 20:27:15.565404  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:15.565410  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:15.565482  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:15.591419  181375 cri.go:89] found id: ""
	I1202 20:27:15.591442  181375 logs.go:282] 0 containers: []
	W1202 20:27:15.591449  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:15.591455  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:15.591514  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:15.618053  181375 cri.go:89] found id: ""
	I1202 20:27:15.618074  181375 logs.go:282] 0 containers: []
	W1202 20:27:15.618083  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:15.618090  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:15.618174  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:15.643408  181375 cri.go:89] found id: ""
	I1202 20:27:15.643467  181375 logs.go:282] 0 containers: []
	W1202 20:27:15.643490  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:15.643508  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:15.643582  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:15.668741  181375 cri.go:89] found id: ""
	I1202 20:27:15.668766  181375 logs.go:282] 0 containers: []
	W1202 20:27:15.668774  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:15.668781  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:15.668837  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:15.693672  181375 cri.go:89] found id: ""
	I1202 20:27:15.693694  181375 logs.go:282] 0 containers: []
	W1202 20:27:15.693702  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:15.693710  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:15.693726  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:15.733883  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:15.733918  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:15.761327  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:15.761356  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:15.830819  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:15.830852  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:15.844850  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:15.844881  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:15.908290  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
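Every "describe nodes" attempt in this window fails the same way: the connection to localhost:8443 is refused, meaning the API server is not listening yet. A quick check one could run on the node, assuming the kubeconfig and kubectl paths shown in this log, is:

    #!/bin/bash
    # Probe the endpoint this log keeps failing to reach.
    curl -sk https://localhost:8443/healthz || echo "apiserver not listening on 8443"
    # Same query the log runner issues, using the in-node kubeconfig.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig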
	I1202 20:27:18.409793  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:18.420902  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:18.420988  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:18.447264  181375 cri.go:89] found id: ""
	I1202 20:27:18.447286  181375 logs.go:282] 0 containers: []
	W1202 20:27:18.447299  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:18.447306  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:18.447370  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:18.476150  181375 cri.go:89] found id: ""
	I1202 20:27:18.476173  181375 logs.go:282] 0 containers: []
	W1202 20:27:18.476181  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:18.476188  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:18.476243  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:18.504288  181375 cri.go:89] found id: ""
	I1202 20:27:18.504310  181375 logs.go:282] 0 containers: []
	W1202 20:27:18.504318  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:18.504325  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:18.504380  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:18.529733  181375 cri.go:89] found id: ""
	I1202 20:27:18.529773  181375 logs.go:282] 0 containers: []
	W1202 20:27:18.529783  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:18.529809  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:18.529895  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:18.555918  181375 cri.go:89] found id: ""
	I1202 20:27:18.555940  181375 logs.go:282] 0 containers: []
	W1202 20:27:18.555948  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:18.555954  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:18.556016  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:18.588252  181375 cri.go:89] found id: ""
	I1202 20:27:18.588280  181375 logs.go:282] 0 containers: []
	W1202 20:27:18.588290  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:18.588297  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:18.588354  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:18.618394  181375 cri.go:89] found id: ""
	I1202 20:27:18.618421  181375 logs.go:282] 0 containers: []
	W1202 20:27:18.618430  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:18.618436  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:18.618494  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:18.647640  181375 cri.go:89] found id: ""
	I1202 20:27:18.647675  181375 logs.go:282] 0 containers: []
	W1202 20:27:18.647686  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:18.647695  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:18.647707  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:18.689509  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:18.689542  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:18.718921  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:18.718945  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:18.786006  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:18.786041  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:18.801152  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:18.801183  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:18.878865  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:21.379851  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:21.390913  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:21.390982  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:21.418546  181375 cri.go:89] found id: ""
	I1202 20:27:21.418572  181375 logs.go:282] 0 containers: []
	W1202 20:27:21.418581  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:21.418588  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:21.418643  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:21.443519  181375 cri.go:89] found id: ""
	I1202 20:27:21.443542  181375 logs.go:282] 0 containers: []
	W1202 20:27:21.443551  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:21.443557  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:21.443617  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:21.468747  181375 cri.go:89] found id: ""
	I1202 20:27:21.468768  181375 logs.go:282] 0 containers: []
	W1202 20:27:21.468777  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:21.468785  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:21.468841  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:21.494357  181375 cri.go:89] found id: ""
	I1202 20:27:21.494379  181375 logs.go:282] 0 containers: []
	W1202 20:27:21.494387  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:21.494394  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:21.494450  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:21.520251  181375 cri.go:89] found id: ""
	I1202 20:27:21.520273  181375 logs.go:282] 0 containers: []
	W1202 20:27:21.520281  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:21.520288  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:21.520347  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:21.546485  181375 cri.go:89] found id: ""
	I1202 20:27:21.546506  181375 logs.go:282] 0 containers: []
	W1202 20:27:21.546514  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:21.546520  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:21.546575  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:21.571474  181375 cri.go:89] found id: ""
	I1202 20:27:21.571496  181375 logs.go:282] 0 containers: []
	W1202 20:27:21.571504  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:21.571513  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:21.571569  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:21.597017  181375 cri.go:89] found id: ""
	I1202 20:27:21.597039  181375 logs.go:282] 0 containers: []
	W1202 20:27:21.597047  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:21.597055  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:21.597067  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:21.665798  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:21.665832  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:21.680130  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:21.680157  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:21.747626  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:21.747682  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:21.747708  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:21.790052  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:21.790083  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:24.319574  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:24.329609  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:24.329703  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:24.360619  181375 cri.go:89] found id: ""
	I1202 20:27:24.360643  181375 logs.go:282] 0 containers: []
	W1202 20:27:24.360652  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:24.360666  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:24.360733  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:24.386626  181375 cri.go:89] found id: ""
	I1202 20:27:24.386649  181375 logs.go:282] 0 containers: []
	W1202 20:27:24.386658  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:24.386664  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:24.386722  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:24.412006  181375 cri.go:89] found id: ""
	I1202 20:27:24.412031  181375 logs.go:282] 0 containers: []
	W1202 20:27:24.412049  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:24.412056  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:24.412125  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:24.440001  181375 cri.go:89] found id: ""
	I1202 20:27:24.440040  181375 logs.go:282] 0 containers: []
	W1202 20:27:24.440048  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:24.440055  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:24.440126  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:24.471145  181375 cri.go:89] found id: ""
	I1202 20:27:24.471217  181375 logs.go:282] 0 containers: []
	W1202 20:27:24.471239  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:24.471260  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:24.471341  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:24.495902  181375 cri.go:89] found id: ""
	I1202 20:27:24.495924  181375 logs.go:282] 0 containers: []
	W1202 20:27:24.495933  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:24.495939  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:24.496024  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:24.520809  181375 cri.go:89] found id: ""
	I1202 20:27:24.520836  181375 logs.go:282] 0 containers: []
	W1202 20:27:24.520858  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:24.520865  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:24.520934  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:24.546359  181375 cri.go:89] found id: ""
	I1202 20:27:24.546382  181375 logs.go:282] 0 containers: []
	W1202 20:27:24.546390  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:24.546399  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:24.546412  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:24.587313  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:24.587347  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:24.617525  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:24.617599  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:24.691315  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:24.691350  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:24.705890  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:24.705918  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:24.772801  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:27.273055  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:27.283149  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:27.283218  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:27.327511  181375 cri.go:89] found id: ""
	I1202 20:27:27.327539  181375 logs.go:282] 0 containers: []
	W1202 20:27:27.327548  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:27.327554  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:27.327613  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:27.360465  181375 cri.go:89] found id: ""
	I1202 20:27:27.360490  181375 logs.go:282] 0 containers: []
	W1202 20:27:27.360499  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:27.360506  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:27.360561  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:27.388598  181375 cri.go:89] found id: ""
	I1202 20:27:27.388626  181375 logs.go:282] 0 containers: []
	W1202 20:27:27.388634  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:27.388641  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:27.388698  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:27.415557  181375 cri.go:89] found id: ""
	I1202 20:27:27.415578  181375 logs.go:282] 0 containers: []
	W1202 20:27:27.415586  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:27.415593  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:27.415650  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:27.444287  181375 cri.go:89] found id: ""
	I1202 20:27:27.444308  181375 logs.go:282] 0 containers: []
	W1202 20:27:27.444318  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:27.444349  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:27.444433  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:27.469541  181375 cri.go:89] found id: ""
	I1202 20:27:27.469566  181375 logs.go:282] 0 containers: []
	W1202 20:27:27.469583  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:27.469591  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:27.469650  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:27.494478  181375 cri.go:89] found id: ""
	I1202 20:27:27.494498  181375 logs.go:282] 0 containers: []
	W1202 20:27:27.494507  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:27.494513  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:27.494569  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:27.520539  181375 cri.go:89] found id: ""
	I1202 20:27:27.520560  181375 logs.go:282] 0 containers: []
	W1202 20:27:27.520569  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:27.520578  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:27.520594  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:27.550006  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:27.550073  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:27.616015  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:27.616048  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:27.629808  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:27.629837  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:27.696870  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:27.696910  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:27.696923  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:30.241282  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:30.253942  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:30.254014  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:30.282605  181375 cri.go:89] found id: ""
	I1202 20:27:30.282627  181375 logs.go:282] 0 containers: []
	W1202 20:27:30.282636  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:30.282643  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:30.282701  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:30.312487  181375 cri.go:89] found id: ""
	I1202 20:27:30.312509  181375 logs.go:282] 0 containers: []
	W1202 20:27:30.312517  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:30.312523  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:30.312579  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:30.355500  181375 cri.go:89] found id: ""
	I1202 20:27:30.355522  181375 logs.go:282] 0 containers: []
	W1202 20:27:30.355531  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:30.355537  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:30.355594  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:30.384658  181375 cri.go:89] found id: ""
	I1202 20:27:30.384680  181375 logs.go:282] 0 containers: []
	W1202 20:27:30.384689  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:30.384694  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:30.384758  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:30.411237  181375 cri.go:89] found id: ""
	I1202 20:27:30.411310  181375 logs.go:282] 0 containers: []
	W1202 20:27:30.411326  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:30.411334  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:30.411405  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:30.438578  181375 cri.go:89] found id: ""
	I1202 20:27:30.438600  181375 logs.go:282] 0 containers: []
	W1202 20:27:30.438608  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:30.438615  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:30.438672  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:30.465144  181375 cri.go:89] found id: ""
	I1202 20:27:30.465172  181375 logs.go:282] 0 containers: []
	W1202 20:27:30.465181  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:30.465187  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:30.465244  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:30.492998  181375 cri.go:89] found id: ""
	I1202 20:27:30.493023  181375 logs.go:282] 0 containers: []
	W1202 20:27:30.493032  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:30.493040  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:30.493056  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:30.559760  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:30.559793  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:30.574202  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:30.574252  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:30.642016  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:30.642034  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:30.642046  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:30.690205  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:30.690262  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:33.221144  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:33.232368  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:33.232457  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:33.259681  181375 cri.go:89] found id: ""
	I1202 20:27:33.259704  181375 logs.go:282] 0 containers: []
	W1202 20:27:33.259713  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:33.259719  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:33.259778  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:33.288777  181375 cri.go:89] found id: ""
	I1202 20:27:33.288800  181375 logs.go:282] 0 containers: []
	W1202 20:27:33.288808  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:33.288815  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:33.288881  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:33.352179  181375 cri.go:89] found id: ""
	I1202 20:27:33.352201  181375 logs.go:282] 0 containers: []
	W1202 20:27:33.352210  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:33.352216  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:33.352275  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:33.386117  181375 cri.go:89] found id: ""
	I1202 20:27:33.386139  181375 logs.go:282] 0 containers: []
	W1202 20:27:33.386148  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:33.386155  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:33.386211  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:33.416214  181375 cri.go:89] found id: ""
	I1202 20:27:33.416241  181375 logs.go:282] 0 containers: []
	W1202 20:27:33.416250  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:33.416256  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:33.416315  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:33.442056  181375 cri.go:89] found id: ""
	I1202 20:27:33.442122  181375 logs.go:282] 0 containers: []
	W1202 20:27:33.442137  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:33.442144  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:33.442208  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:33.470256  181375 cri.go:89] found id: ""
	I1202 20:27:33.470278  181375 logs.go:282] 0 containers: []
	W1202 20:27:33.470293  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:33.470299  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:33.470366  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:33.496262  181375 cri.go:89] found id: ""
	I1202 20:27:33.496285  181375 logs.go:282] 0 containers: []
	W1202 20:27:33.496302  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:33.496312  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:33.496323  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:33.562764  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:33.562802  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:33.578396  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:33.578424  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:33.649791  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:33.649813  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:33.649827  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:33.693399  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:33.693434  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:36.225393  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:36.236096  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:36.236164  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:36.263679  181375 cri.go:89] found id: ""
	I1202 20:27:36.263701  181375 logs.go:282] 0 containers: []
	W1202 20:27:36.263709  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:36.263716  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:36.263774  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:36.289316  181375 cri.go:89] found id: ""
	I1202 20:27:36.289338  181375 logs.go:282] 0 containers: []
	W1202 20:27:36.289346  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:36.289352  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:36.289410  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:36.322773  181375 cri.go:89] found id: ""
	I1202 20:27:36.322798  181375 logs.go:282] 0 containers: []
	W1202 20:27:36.322806  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:36.322812  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:36.322866  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:36.350870  181375 cri.go:89] found id: ""
	I1202 20:27:36.350894  181375 logs.go:282] 0 containers: []
	W1202 20:27:36.350903  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:36.350910  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:36.350967  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:36.379544  181375 cri.go:89] found id: ""
	I1202 20:27:36.379566  181375 logs.go:282] 0 containers: []
	W1202 20:27:36.379576  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:36.379581  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:36.379639  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:36.405513  181375 cri.go:89] found id: ""
	I1202 20:27:36.405535  181375 logs.go:282] 0 containers: []
	W1202 20:27:36.405543  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:36.405550  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:36.405617  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:36.431154  181375 cri.go:89] found id: ""
	I1202 20:27:36.431225  181375 logs.go:282] 0 containers: []
	W1202 20:27:36.431240  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:36.431248  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:36.431310  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:36.457316  181375 cri.go:89] found id: ""
	I1202 20:27:36.457339  181375 logs.go:282] 0 containers: []
	W1202 20:27:36.457348  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:36.457357  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:36.457370  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:36.472186  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:36.472212  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:36.540431  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:36.540453  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:36.540464  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:36.580946  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:36.580978  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:36.612833  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:36.612906  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:39.185624  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:39.196153  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:39.196228  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:39.221729  181375 cri.go:89] found id: ""
	I1202 20:27:39.221752  181375 logs.go:282] 0 containers: []
	W1202 20:27:39.221762  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:39.221769  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:39.221839  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:39.250674  181375 cri.go:89] found id: ""
	I1202 20:27:39.250697  181375 logs.go:282] 0 containers: []
	W1202 20:27:39.250705  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:39.250712  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:39.250772  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:39.276485  181375 cri.go:89] found id: ""
	I1202 20:27:39.276511  181375 logs.go:282] 0 containers: []
	W1202 20:27:39.276519  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:39.276531  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:39.276590  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:39.311286  181375 cri.go:89] found id: ""
	I1202 20:27:39.311309  181375 logs.go:282] 0 containers: []
	W1202 20:27:39.311319  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:39.311325  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:39.311383  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:39.345638  181375 cri.go:89] found id: ""
	I1202 20:27:39.345680  181375 logs.go:282] 0 containers: []
	W1202 20:27:39.345690  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:39.345697  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:39.345758  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:39.372863  181375 cri.go:89] found id: ""
	I1202 20:27:39.372888  181375 logs.go:282] 0 containers: []
	W1202 20:27:39.372896  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:39.372903  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:39.372959  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:39.401715  181375 cri.go:89] found id: ""
	I1202 20:27:39.401741  181375 logs.go:282] 0 containers: []
	W1202 20:27:39.401760  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:39.401767  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:39.401828  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:39.429088  181375 cri.go:89] found id: ""
	I1202 20:27:39.429111  181375 logs.go:282] 0 containers: []
	W1202 20:27:39.429120  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:39.429128  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:39.429141  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:39.447499  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:39.447531  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:39.509995  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:39.510054  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:39.510080  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:39.555295  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:39.555337  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:39.583279  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:39.583305  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:42.158229  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:42.172937  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:42.173029  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:42.222542  181375 cri.go:89] found id: ""
	I1202 20:27:42.222576  181375 logs.go:282] 0 containers: []
	W1202 20:27:42.222587  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:42.222594  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:42.222669  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:42.267153  181375 cri.go:89] found id: ""
	I1202 20:27:42.267181  181375 logs.go:282] 0 containers: []
	W1202 20:27:42.267191  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:42.267198  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:42.267265  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:42.316118  181375 cri.go:89] found id: ""
	I1202 20:27:42.316149  181375 logs.go:282] 0 containers: []
	W1202 20:27:42.316159  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:42.316165  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:42.316231  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:42.384810  181375 cri.go:89] found id: ""
	I1202 20:27:42.384843  181375 logs.go:282] 0 containers: []
	W1202 20:27:42.384852  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:42.384859  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:42.384920  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:42.431582  181375 cri.go:89] found id: ""
	I1202 20:27:42.431608  181375 logs.go:282] 0 containers: []
	W1202 20:27:42.431617  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:42.431624  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:42.431684  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:42.472486  181375 cri.go:89] found id: ""
	I1202 20:27:42.472527  181375 logs.go:282] 0 containers: []
	W1202 20:27:42.472537  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:42.472544  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:42.472606  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:42.510775  181375 cri.go:89] found id: ""
	I1202 20:27:42.510800  181375 logs.go:282] 0 containers: []
	W1202 20:27:42.510809  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:42.510816  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:42.510874  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:42.537182  181375 cri.go:89] found id: ""
	I1202 20:27:42.537205  181375 logs.go:282] 0 containers: []
	W1202 20:27:42.537213  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:42.537222  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:42.537233  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:42.611131  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:42.611163  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:42.625640  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:42.625744  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:42.688051  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:42.688072  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:42.688085  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:42.730808  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:42.730841  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:45.263124  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:45.298350  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:45.298460  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:45.352781  181375 cri.go:89] found id: ""
	I1202 20:27:45.352807  181375 logs.go:282] 0 containers: []
	W1202 20:27:45.352816  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:45.352823  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:45.352882  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:45.395008  181375 cri.go:89] found id: ""
	I1202 20:27:45.395035  181375 logs.go:282] 0 containers: []
	W1202 20:27:45.395044  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:45.395050  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:45.395136  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:45.444155  181375 cri.go:89] found id: ""
	I1202 20:27:45.444177  181375 logs.go:282] 0 containers: []
	W1202 20:27:45.444186  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:45.444192  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:45.444248  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:45.476012  181375 cri.go:89] found id: ""
	I1202 20:27:45.476246  181375 logs.go:282] 0 containers: []
	W1202 20:27:45.476308  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:45.476351  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:45.476523  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:45.517158  181375 cri.go:89] found id: ""
	I1202 20:27:45.517184  181375 logs.go:282] 0 containers: []
	W1202 20:27:45.517193  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:45.517200  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:45.517277  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:45.549618  181375 cri.go:89] found id: ""
	I1202 20:27:45.549642  181375 logs.go:282] 0 containers: []
	W1202 20:27:45.549671  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:45.549678  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:45.549761  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:45.589163  181375 cri.go:89] found id: ""
	I1202 20:27:45.589187  181375 logs.go:282] 0 containers: []
	W1202 20:27:45.589196  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:45.589203  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:45.589281  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:45.637027  181375 cri.go:89] found id: ""
	I1202 20:27:45.637051  181375 logs.go:282] 0 containers: []
	W1202 20:27:45.637059  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:45.637067  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:45.637077  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:45.687827  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:45.687857  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:45.734787  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:45.734813  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:45.811785  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:45.811855  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:45.837224  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:45.837250  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:45.918641  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
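The cycle recorded above repeats roughly every three seconds while minikube waits for a healthy apiserver: probe for a kube-apiserver process, list each control-plane container by name with crictl, gather kubelet/dmesg/CRI-O logs, and attempt `kubectl describe nodes`, which keeps failing because nothing is listening on localhost:8443. Below is a minimal hand-run sketch of the same probe sequence, assuming a shell inside the node (e.g. via `minikube ssh`); the individual commands are copied from the log, while the loop wrapper and echo messages are illustrative only.

    # Sketch of the probe loop seen in the log (loop/echo wrapper is illustrative).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] || echo "No container was found matching \"$name\""
    done
    # Log gathering performed after each empty pass:
    sudo journalctl -u kubelet -n 400 | tail
    sudo journalctl -u crio -n 400 | tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # The step that fails with "connection to the server localhost:8443 was refused"
    # while the apiserver is down:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
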
	I1202 20:27:48.420279  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:48.430421  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:48.430493  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:48.455873  181375 cri.go:89] found id: ""
	I1202 20:27:48.455895  181375 logs.go:282] 0 containers: []
	W1202 20:27:48.455904  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:48.455911  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:48.455967  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:48.491156  181375 cri.go:89] found id: ""
	I1202 20:27:48.491177  181375 logs.go:282] 0 containers: []
	W1202 20:27:48.491186  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:48.491192  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:48.491247  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:48.516248  181375 cri.go:89] found id: ""
	I1202 20:27:48.516269  181375 logs.go:282] 0 containers: []
	W1202 20:27:48.516277  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:48.516284  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:48.516344  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:48.546423  181375 cri.go:89] found id: ""
	I1202 20:27:48.546458  181375 logs.go:282] 0 containers: []
	W1202 20:27:48.546466  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:48.546472  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:48.546534  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:48.571999  181375 cri.go:89] found id: ""
	I1202 20:27:48.572021  181375 logs.go:282] 0 containers: []
	W1202 20:27:48.572029  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:48.572045  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:48.572102  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:48.600191  181375 cri.go:89] found id: ""
	I1202 20:27:48.600216  181375 logs.go:282] 0 containers: []
	W1202 20:27:48.600224  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:48.600231  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:48.600286  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:48.635299  181375 cri.go:89] found id: ""
	I1202 20:27:48.635323  181375 logs.go:282] 0 containers: []
	W1202 20:27:48.635332  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:48.635338  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:48.635395  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:48.666535  181375 cri.go:89] found id: ""
	I1202 20:27:48.666560  181375 logs.go:282] 0 containers: []
	W1202 20:27:48.666570  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:48.666582  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:48.666593  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:48.746090  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:48.746231  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:48.762719  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:48.762841  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:48.854734  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:48.854801  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:48.854829  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:48.898828  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:48.898901  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:51.440302  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:51.450436  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:51.450506  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:51.477289  181375 cri.go:89] found id: ""
	I1202 20:27:51.477312  181375 logs.go:282] 0 containers: []
	W1202 20:27:51.477320  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:51.477326  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:51.477382  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:51.502642  181375 cri.go:89] found id: ""
	I1202 20:27:51.502672  181375 logs.go:282] 0 containers: []
	W1202 20:27:51.502680  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:51.502687  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:51.502746  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:51.531729  181375 cri.go:89] found id: ""
	I1202 20:27:51.531753  181375 logs.go:282] 0 containers: []
	W1202 20:27:51.531762  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:51.531768  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:51.531825  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:51.557500  181375 cri.go:89] found id: ""
	I1202 20:27:51.557524  181375 logs.go:282] 0 containers: []
	W1202 20:27:51.557533  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:51.557539  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:51.557595  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:51.588167  181375 cri.go:89] found id: ""
	I1202 20:27:51.588191  181375 logs.go:282] 0 containers: []
	W1202 20:27:51.588200  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:51.588206  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:51.588261  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:51.615156  181375 cri.go:89] found id: ""
	I1202 20:27:51.615181  181375 logs.go:282] 0 containers: []
	W1202 20:27:51.615190  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:51.615196  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:51.615256  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:51.641149  181375 cri.go:89] found id: ""
	I1202 20:27:51.641170  181375 logs.go:282] 0 containers: []
	W1202 20:27:51.641179  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:51.641185  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:51.641240  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:51.669049  181375 cri.go:89] found id: ""
	I1202 20:27:51.669070  181375 logs.go:282] 0 containers: []
	W1202 20:27:51.669079  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:51.669087  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:51.669098  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:51.717049  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:51.717092  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:51.749556  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:51.749629  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:51.822698  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:51.822735  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:51.837713  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:51.837785  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:51.903970  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:54.404203  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:54.417645  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:54.417734  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:54.444301  181375 cri.go:89] found id: ""
	I1202 20:27:54.444324  181375 logs.go:282] 0 containers: []
	W1202 20:27:54.444332  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:54.444338  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:54.444396  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:54.470854  181375 cri.go:89] found id: ""
	I1202 20:27:54.470881  181375 logs.go:282] 0 containers: []
	W1202 20:27:54.470889  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:54.470896  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:54.470951  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:54.497194  181375 cri.go:89] found id: ""
	I1202 20:27:54.497218  181375 logs.go:282] 0 containers: []
	W1202 20:27:54.497227  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:54.497233  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:54.497290  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:54.523087  181375 cri.go:89] found id: ""
	I1202 20:27:54.523114  181375 logs.go:282] 0 containers: []
	W1202 20:27:54.523123  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:54.523130  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:54.523185  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:54.553275  181375 cri.go:89] found id: ""
	I1202 20:27:54.553297  181375 logs.go:282] 0 containers: []
	W1202 20:27:54.553305  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:54.553312  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:54.553370  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:54.578899  181375 cri.go:89] found id: ""
	I1202 20:27:54.578923  181375 logs.go:282] 0 containers: []
	W1202 20:27:54.578932  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:54.578938  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:54.578995  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:54.603691  181375 cri.go:89] found id: ""
	I1202 20:27:54.603713  181375 logs.go:282] 0 containers: []
	W1202 20:27:54.603730  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:54.603737  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:54.603792  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:54.629062  181375 cri.go:89] found id: ""
	I1202 20:27:54.629099  181375 logs.go:282] 0 containers: []
	W1202 20:27:54.629112  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:54.629121  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:54.629132  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:54.698623  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:54.698656  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:54.713132  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:54.713160  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:54.780524  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:54.780544  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:54.780557  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:54.821642  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:54.821692  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:27:57.350230  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:57.360527  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:27:57.360592  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:27:57.389555  181375 cri.go:89] found id: ""
	I1202 20:27:57.389581  181375 logs.go:282] 0 containers: []
	W1202 20:27:57.389589  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:27:57.389596  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:27:57.389676  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:27:57.415199  181375 cri.go:89] found id: ""
	I1202 20:27:57.415224  181375 logs.go:282] 0 containers: []
	W1202 20:27:57.415232  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:27:57.415239  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:27:57.415295  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:27:57.440624  181375 cri.go:89] found id: ""
	I1202 20:27:57.440645  181375 logs.go:282] 0 containers: []
	W1202 20:27:57.440654  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:27:57.440660  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:27:57.440713  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:27:57.469487  181375 cri.go:89] found id: ""
	I1202 20:27:57.469512  181375 logs.go:282] 0 containers: []
	W1202 20:27:57.469520  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:27:57.469527  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:27:57.469584  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:27:57.494517  181375 cri.go:89] found id: ""
	I1202 20:27:57.494538  181375 logs.go:282] 0 containers: []
	W1202 20:27:57.494547  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:27:57.494553  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:27:57.494608  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:27:57.518954  181375 cri.go:89] found id: ""
	I1202 20:27:57.518978  181375 logs.go:282] 0 containers: []
	W1202 20:27:57.518987  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:27:57.518995  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:27:57.519049  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:27:57.546862  181375 cri.go:89] found id: ""
	I1202 20:27:57.546885  181375 logs.go:282] 0 containers: []
	W1202 20:27:57.546895  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:27:57.546905  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:27:57.546963  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:27:57.572012  181375 cri.go:89] found id: ""
	I1202 20:27:57.572036  181375 logs.go:282] 0 containers: []
	W1202 20:27:57.572045  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:27:57.572054  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:27:57.572066  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:27:57.641223  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:27:57.641258  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:27:57.655894  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:27:57.655929  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:27:57.719976  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:27:57.719995  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:27:57.720009  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:27:57.761144  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:27:57.761175  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:28:00.297071  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:00.315225  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:28:00.315304  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:28:00.380918  181375 cri.go:89] found id: ""
	I1202 20:28:00.380945  181375 logs.go:282] 0 containers: []
	W1202 20:28:00.380955  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:28:00.380963  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:28:00.381049  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:28:00.438448  181375 cri.go:89] found id: ""
	I1202 20:28:00.438472  181375 logs.go:282] 0 containers: []
	W1202 20:28:00.438481  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:28:00.438487  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:28:00.438551  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:28:00.470287  181375 cri.go:89] found id: ""
	I1202 20:28:00.470315  181375 logs.go:282] 0 containers: []
	W1202 20:28:00.470324  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:28:00.470332  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:28:00.470400  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:28:00.503657  181375 cri.go:89] found id: ""
	I1202 20:28:00.503692  181375 logs.go:282] 0 containers: []
	W1202 20:28:00.503701  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:28:00.503710  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:28:00.503782  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:28:00.532927  181375 cri.go:89] found id: ""
	I1202 20:28:00.532950  181375 logs.go:282] 0 containers: []
	W1202 20:28:00.532959  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:28:00.532966  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:28:00.533033  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:28:00.560900  181375 cri.go:89] found id: ""
	I1202 20:28:00.560922  181375 logs.go:282] 0 containers: []
	W1202 20:28:00.560930  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:28:00.560937  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:28:00.560997  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:28:00.587237  181375 cri.go:89] found id: ""
	I1202 20:28:00.587260  181375 logs.go:282] 0 containers: []
	W1202 20:28:00.587268  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:28:00.587274  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:28:00.587338  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:28:00.613497  181375 cri.go:89] found id: ""
	I1202 20:28:00.613519  181375 logs.go:282] 0 containers: []
	W1202 20:28:00.613536  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:28:00.613545  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:28:00.613557  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:28:00.683875  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:28:00.683894  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:28:00.683906  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:28:00.726643  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:28:00.726678  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:28:00.763289  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:28:00.763319  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:28:00.834449  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:28:00.834487  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:28:03.350606  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:03.361093  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:28:03.361163  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:28:03.388708  181375 cri.go:89] found id: ""
	I1202 20:28:03.388733  181375 logs.go:282] 0 containers: []
	W1202 20:28:03.388742  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:28:03.388748  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:28:03.388847  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:28:03.416734  181375 cri.go:89] found id: ""
	I1202 20:28:03.416759  181375 logs.go:282] 0 containers: []
	W1202 20:28:03.416767  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:28:03.416774  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:28:03.416832  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:28:03.444504  181375 cri.go:89] found id: ""
	I1202 20:28:03.444524  181375 logs.go:282] 0 containers: []
	W1202 20:28:03.444532  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:28:03.444538  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:28:03.444597  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:28:03.472197  181375 cri.go:89] found id: ""
	I1202 20:28:03.472220  181375 logs.go:282] 0 containers: []
	W1202 20:28:03.472229  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:28:03.472237  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:28:03.472299  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:28:03.499685  181375 cri.go:89] found id: ""
	I1202 20:28:03.499711  181375 logs.go:282] 0 containers: []
	W1202 20:28:03.499730  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:28:03.499738  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:28:03.499795  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:28:03.526307  181375 cri.go:89] found id: ""
	I1202 20:28:03.526332  181375 logs.go:282] 0 containers: []
	W1202 20:28:03.526340  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:28:03.526347  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:28:03.526403  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:28:03.551665  181375 cri.go:89] found id: ""
	I1202 20:28:03.551688  181375 logs.go:282] 0 containers: []
	W1202 20:28:03.551697  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:28:03.551703  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:28:03.551763  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:28:03.580646  181375 cri.go:89] found id: ""
	I1202 20:28:03.580719  181375 logs.go:282] 0 containers: []
	W1202 20:28:03.580743  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:28:03.580767  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:28:03.580794  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:28:03.595123  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:28:03.595155  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:28:03.663036  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:28:03.663064  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:28:03.663079  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:28:03.704193  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:28:03.704229  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:28:03.733522  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:28:03.733550  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:28:06.305762  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:06.315929  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:28:06.316005  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:28:06.342120  181375 cri.go:89] found id: ""
	I1202 20:28:06.342145  181375 logs.go:282] 0 containers: []
	W1202 20:28:06.342154  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:28:06.342166  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:28:06.342231  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:28:06.367599  181375 cri.go:89] found id: ""
	I1202 20:28:06.367623  181375 logs.go:282] 0 containers: []
	W1202 20:28:06.367632  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:28:06.367638  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:28:06.367693  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:28:06.393575  181375 cri.go:89] found id: ""
	I1202 20:28:06.393598  181375 logs.go:282] 0 containers: []
	W1202 20:28:06.393607  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:28:06.393613  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:28:06.393692  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:28:06.419470  181375 cri.go:89] found id: ""
	I1202 20:28:06.419492  181375 logs.go:282] 0 containers: []
	W1202 20:28:06.419500  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:28:06.419512  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:28:06.419566  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:28:06.449935  181375 cri.go:89] found id: ""
	I1202 20:28:06.449959  181375 logs.go:282] 0 containers: []
	W1202 20:28:06.449967  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:28:06.449974  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:28:06.450032  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:28:06.479234  181375 cri.go:89] found id: ""
	I1202 20:28:06.479305  181375 logs.go:282] 0 containers: []
	W1202 20:28:06.479320  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:28:06.479328  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:28:06.479384  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:28:06.503244  181375 cri.go:89] found id: ""
	I1202 20:28:06.503268  181375 logs.go:282] 0 containers: []
	W1202 20:28:06.503277  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:28:06.503283  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:28:06.503344  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:28:06.529139  181375 cri.go:89] found id: ""
	I1202 20:28:06.529162  181375 logs.go:282] 0 containers: []
	W1202 20:28:06.529171  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:28:06.529180  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:28:06.529191  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:28:06.557505  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:28:06.557532  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:28:06.627146  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:28:06.627181  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:28:06.641345  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:28:06.641374  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:28:06.714162  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:28:06.714183  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:28:06.714196  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:28:09.257404  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:09.267522  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:28:09.267586  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:28:09.296233  181375 cri.go:89] found id: ""
	I1202 20:28:09.296252  181375 logs.go:282] 0 containers: []
	W1202 20:28:09.296261  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:28:09.296267  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:28:09.296321  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:28:09.325602  181375 cri.go:89] found id: ""
	I1202 20:28:09.325622  181375 logs.go:282] 0 containers: []
	W1202 20:28:09.325631  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:28:09.325637  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:28:09.325702  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:28:09.358423  181375 cri.go:89] found id: ""
	I1202 20:28:09.358451  181375 logs.go:282] 0 containers: []
	W1202 20:28:09.358460  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:28:09.358466  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:28:09.358523  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:28:09.388661  181375 cri.go:89] found id: ""
	I1202 20:28:09.388685  181375 logs.go:282] 0 containers: []
	W1202 20:28:09.388694  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:28:09.388716  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:28:09.388774  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:28:09.417139  181375 cri.go:89] found id: ""
	I1202 20:28:09.417160  181375 logs.go:282] 0 containers: []
	W1202 20:28:09.417168  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:28:09.417175  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:28:09.417253  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:28:09.442216  181375 cri.go:89] found id: ""
	I1202 20:28:09.442239  181375 logs.go:282] 0 containers: []
	W1202 20:28:09.442247  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:28:09.442254  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:28:09.442310  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:28:09.466502  181375 cri.go:89] found id: ""
	I1202 20:28:09.466524  181375 logs.go:282] 0 containers: []
	W1202 20:28:09.466532  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:28:09.466539  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:28:09.466602  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:28:09.490520  181375 cri.go:89] found id: ""
	I1202 20:28:09.490541  181375 logs.go:282] 0 containers: []
	W1202 20:28:09.490561  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:28:09.490570  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:28:09.490581  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:28:09.519248  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:28:09.519322  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:28:09.589601  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:28:09.589635  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:28:09.603934  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:28:09.603962  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:28:09.672739  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:28:09.672761  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:28:09.672779  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:28:12.213785  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:12.226129  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:28:12.226196  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:28:12.260628  181375 cri.go:89] found id: ""
	I1202 20:28:12.260649  181375 logs.go:282] 0 containers: []
	W1202 20:28:12.260657  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:28:12.260664  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:28:12.260718  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:28:12.296303  181375 cri.go:89] found id: ""
	I1202 20:28:12.296323  181375 logs.go:282] 0 containers: []
	W1202 20:28:12.296332  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:28:12.296339  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:28:12.296394  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:28:12.363329  181375 cri.go:89] found id: ""
	I1202 20:28:12.363350  181375 logs.go:282] 0 containers: []
	W1202 20:28:12.363358  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:28:12.363365  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:28:12.363418  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:28:12.419694  181375 cri.go:89] found id: ""
	I1202 20:28:12.419716  181375 logs.go:282] 0 containers: []
	W1202 20:28:12.419724  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:28:12.419731  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:28:12.419785  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:28:12.449095  181375 cri.go:89] found id: ""
	I1202 20:28:12.449115  181375 logs.go:282] 0 containers: []
	W1202 20:28:12.449123  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:28:12.449130  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:28:12.449184  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:28:12.483901  181375 cri.go:89] found id: ""
	I1202 20:28:12.483961  181375 logs.go:282] 0 containers: []
	W1202 20:28:12.483991  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:28:12.484010  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:28:12.484090  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:28:12.527600  181375 cri.go:89] found id: ""
	I1202 20:28:12.527675  181375 logs.go:282] 0 containers: []
	W1202 20:28:12.527698  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:28:12.527718  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:28:12.527834  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:28:12.557487  181375 cri.go:89] found id: ""
	I1202 20:28:12.557564  181375 logs.go:282] 0 containers: []
	W1202 20:28:12.557586  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:28:12.557610  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:28:12.557649  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:28:12.631900  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:28:12.632163  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:28:12.646616  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:28:12.646645  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:28:12.710331  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:28:12.710353  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:28:12.710366  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:28:12.752954  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:28:12.752991  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:28:15.281763  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:15.300843  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:28:15.300914  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:28:15.367958  181375 cri.go:89] found id: ""
	I1202 20:28:15.367986  181375 logs.go:282] 0 containers: []
	W1202 20:28:15.367995  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:28:15.368001  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:28:15.368056  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:28:15.401640  181375 cri.go:89] found id: ""
	I1202 20:28:15.401680  181375 logs.go:282] 0 containers: []
	W1202 20:28:15.401689  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:28:15.401695  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:28:15.401751  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:28:15.436328  181375 cri.go:89] found id: ""
	I1202 20:28:15.436355  181375 logs.go:282] 0 containers: []
	W1202 20:28:15.436364  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:28:15.436370  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:28:15.436428  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:28:15.465329  181375 cri.go:89] found id: ""
	I1202 20:28:15.465355  181375 logs.go:282] 0 containers: []
	W1202 20:28:15.465363  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:28:15.465369  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:28:15.465424  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:28:15.498239  181375 cri.go:89] found id: ""
	I1202 20:28:15.498260  181375 logs.go:282] 0 containers: []
	W1202 20:28:15.498269  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:28:15.498275  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:28:15.498331  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:28:15.541972  181375 cri.go:89] found id: ""
	I1202 20:28:15.541998  181375 logs.go:282] 0 containers: []
	W1202 20:28:15.542007  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:28:15.542013  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:28:15.542070  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:28:15.570662  181375 cri.go:89] found id: ""
	I1202 20:28:15.570687  181375 logs.go:282] 0 containers: []
	W1202 20:28:15.570696  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:28:15.570702  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:28:15.570756  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:28:15.611600  181375 cri.go:89] found id: ""
	I1202 20:28:15.611625  181375 logs.go:282] 0 containers: []
	W1202 20:28:15.611634  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:28:15.611643  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:28:15.611656  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:28:15.627526  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:28:15.627555  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:28:15.719558  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:28:15.719581  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:28:15.719594  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:28:15.770701  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:28:15.770737  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:28:15.816655  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:28:15.816684  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:28:18.397791  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:18.407900  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:28:18.407967  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:28:18.434536  181375 cri.go:89] found id: ""
	I1202 20:28:18.434558  181375 logs.go:282] 0 containers: []
	W1202 20:28:18.434567  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:28:18.434573  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:28:18.434630  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:28:18.459611  181375 cri.go:89] found id: ""
	I1202 20:28:18.459634  181375 logs.go:282] 0 containers: []
	W1202 20:28:18.459642  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:28:18.459648  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:28:18.459706  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:28:18.489376  181375 cri.go:89] found id: ""
	I1202 20:28:18.489397  181375 logs.go:282] 0 containers: []
	W1202 20:28:18.489405  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:28:18.489411  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:28:18.489468  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:28:18.514891  181375 cri.go:89] found id: ""
	I1202 20:28:18.514913  181375 logs.go:282] 0 containers: []
	W1202 20:28:18.514923  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:28:18.514930  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:28:18.514989  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:28:18.542907  181375 cri.go:89] found id: ""
	I1202 20:28:18.542929  181375 logs.go:282] 0 containers: []
	W1202 20:28:18.542937  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:28:18.542943  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:28:18.542999  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:28:18.573728  181375 cri.go:89] found id: ""
	I1202 20:28:18.573756  181375 logs.go:282] 0 containers: []
	W1202 20:28:18.573770  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:28:18.573778  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:28:18.573840  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:28:18.600665  181375 cri.go:89] found id: ""
	I1202 20:28:18.600691  181375 logs.go:282] 0 containers: []
	W1202 20:28:18.600700  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:28:18.600707  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:28:18.600765  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:28:18.626368  181375 cri.go:89] found id: ""
	I1202 20:28:18.626391  181375 logs.go:282] 0 containers: []
	W1202 20:28:18.626400  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:28:18.626408  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:28:18.626419  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:28:18.695920  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:28:18.695956  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:28:18.710368  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:28:18.710400  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:28:18.776732  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:28:18.776754  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:28:18.776768  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:28:18.817658  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:28:18.817699  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:28:21.348584  181375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:28:21.358192  181375 kubeadm.go:602] duration metric: took 4m2.512925252s to restartPrimaryControlPlane
	W1202 20:28:21.358258  181375 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 20:28:21.358317  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 20:28:21.764146  181375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:28:21.777220  181375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:28:21.785567  181375 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:28:21.785634  181375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:28:21.793415  181375 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:28:21.793433  181375 kubeadm.go:158] found existing configuration files:
	
	I1202 20:28:21.793506  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:28:21.801200  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:28:21.801266  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:28:21.809057  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:28:21.816878  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:28:21.816945  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:28:21.824464  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:28:21.832668  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:28:21.832737  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:28:21.840196  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:28:21.847608  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:28:21.847678  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:28:21.855387  181375 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:28:21.892544  181375 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 20:28:21.892827  181375 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:28:21.967929  181375 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:28:21.968003  181375 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 20:28:21.968049  181375 kubeadm.go:319] OS: Linux
	I1202 20:28:21.968098  181375 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:28:21.968149  181375 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 20:28:21.968200  181375 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:28:21.968256  181375 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:28:21.968308  181375 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:28:21.968363  181375 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:28:21.968414  181375 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:28:21.968465  181375 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:28:21.968515  181375 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 20:28:22.043061  181375 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:28:22.043182  181375 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:28:22.043278  181375 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:28:22.066266  181375 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 20:28:22.072877  181375 out.go:252]   - Generating certificates and keys ...
	I1202 20:28:22.072988  181375 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:28:22.073061  181375 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:28:22.073142  181375 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 20:28:22.073208  181375 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 20:28:22.073282  181375 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 20:28:22.073340  181375 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 20:28:22.073407  181375 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 20:28:22.073473  181375 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 20:28:22.073552  181375 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 20:28:22.073629  181375 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 20:28:22.073727  181375 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 20:28:22.073789  181375 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 20:28:22.239158  181375 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:28:22.459883  181375 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:28:22.570593  181375 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:28:23.120447  181375 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:28:23.300255  181375 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:28:23.301283  181375 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:28:23.304216  181375 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 20:28:23.308257  181375 out.go:252]   - Booting up control plane ...
	I1202 20:28:23.308359  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:28:23.308442  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:28:23.310270  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:28:23.328010  181375 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:28:23.328165  181375 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:28:23.336153  181375 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:28:23.343823  181375 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:28:23.343908  181375 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:28:23.503110  181375 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:28:23.503288  181375 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:32:23.503840  181375 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00103374s
	I1202 20:32:23.503874  181375 kubeadm.go:319] 
	I1202 20:32:23.503956  181375 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 20:32:23.504007  181375 kubeadm.go:319] 	- The kubelet is not running
	I1202 20:32:23.504125  181375 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 20:32:23.504133  181375 kubeadm.go:319] 
	I1202 20:32:23.504237  181375 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 20:32:23.504278  181375 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 20:32:23.504310  181375 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 20:32:23.504314  181375 kubeadm.go:319] 
	I1202 20:32:23.508552  181375 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 20:32:23.509049  181375 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 20:32:23.509182  181375 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:32:23.509456  181375 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 20:32:23.509466  181375 kubeadm.go:319] 
	I1202 20:32:23.509543  181375 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 20:32:23.509696  181375 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00103374s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00103374s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 20:32:23.509800  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 20:32:23.926878  181375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:32:23.941936  181375 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:32:23.941999  181375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:32:23.956231  181375 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:32:23.956251  181375 kubeadm.go:158] found existing configuration files:
	
	I1202 20:32:23.956312  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:32:23.965353  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:32:23.965424  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:32:23.974074  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:32:23.983217  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:32:23.983330  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:32:23.991422  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:32:24.000663  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:32:24.000765  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:32:24.011295  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:32:24.021043  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:32:24.021186  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:32:24.035281  181375 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:32:24.091828  181375 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 20:32:24.092279  181375 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:32:24.177998  181375 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:32:24.178147  181375 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 20:32:24.178212  181375 kubeadm.go:319] OS: Linux
	I1202 20:32:24.178291  181375 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:32:24.178370  181375 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 20:32:24.178450  181375 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:32:24.178518  181375 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:32:24.178610  181375 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:32:24.178682  181375 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:32:24.178775  181375 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:32:24.178854  181375 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:32:24.178913  181375 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 20:32:24.251731  181375 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:32:24.251848  181375 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:32:24.251946  181375 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:32:24.267154  181375 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 20:32:24.270532  181375 out.go:252]   - Generating certificates and keys ...
	I1202 20:32:24.270644  181375 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:32:24.270773  181375 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:32:24.270888  181375 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 20:32:24.270971  181375 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 20:32:24.271058  181375 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 20:32:24.271152  181375 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 20:32:24.271232  181375 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 20:32:24.271315  181375 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 20:32:24.271403  181375 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 20:32:24.271485  181375 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 20:32:24.271560  181375 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 20:32:24.271633  181375 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 20:32:24.364613  181375 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:32:24.725639  181375 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:32:24.885871  181375 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:32:25.393339  181375 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:32:25.623587  181375 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:32:25.624397  181375 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:32:25.627107  181375 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 20:32:25.630436  181375 out.go:252]   - Booting up control plane ...
	I1202 20:32:25.630545  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:32:25.630624  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:32:25.631009  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:32:25.647838  181375 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:32:25.648215  181375 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:32:25.656245  181375 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:32:25.656744  181375 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:32:25.656812  181375 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:32:25.794935  181375 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:32:25.795058  181375 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:36:25.795730  181375 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001173339s
	I1202 20:36:25.795770  181375 kubeadm.go:319] 
	I1202 20:36:25.795829  181375 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 20:36:25.795864  181375 kubeadm.go:319] 	- The kubelet is not running
	I1202 20:36:25.795968  181375 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 20:36:25.795974  181375 kubeadm.go:319] 
	I1202 20:36:25.796076  181375 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 20:36:25.796107  181375 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 20:36:25.796140  181375 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 20:36:25.796145  181375 kubeadm.go:319] 
	I1202 20:36:25.799726  181375 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 20:36:25.800158  181375 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 20:36:25.800274  181375 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:36:25.800520  181375 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 20:36:25.800530  181375 kubeadm.go:319] 
	I1202 20:36:25.800600  181375 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 20:36:25.800655  181375 kubeadm.go:403] duration metric: took 12m7.006146482s to StartCluster
	I1202 20:36:25.800690  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:36:25.800752  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:36:25.827429  181375 cri.go:89] found id: ""
	I1202 20:36:25.827451  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.827459  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:36:25.827465  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:36:25.827527  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:36:25.854803  181375 cri.go:89] found id: ""
	I1202 20:36:25.854827  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.854835  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:36:25.854842  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:36:25.854910  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:36:25.879772  181375 cri.go:89] found id: ""
	I1202 20:36:25.879797  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.879806  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:36:25.879813  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:36:25.879867  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:36:25.904946  181375 cri.go:89] found id: ""
	I1202 20:36:25.904967  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.904975  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:36:25.904982  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:36:25.905047  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:36:25.930544  181375 cri.go:89] found id: ""
	I1202 20:36:25.930567  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.930576  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:36:25.930582  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:36:25.930636  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:36:25.959586  181375 cri.go:89] found id: ""
	I1202 20:36:25.959608  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.959617  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:36:25.959623  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:36:25.959679  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:36:25.984698  181375 cri.go:89] found id: ""
	I1202 20:36:25.984721  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.984729  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:36:25.984735  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:36:25.984789  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:36:26.013174  181375 cri.go:89] found id: ""
	I1202 20:36:26.013199  181375 logs.go:282] 0 containers: []
	W1202 20:36:26.013208  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:36:26.013218  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:36:26.013229  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:36:26.091680  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:36:26.091722  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:36:26.107966  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:36:26.107996  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:36:26.175717  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:36:26.175770  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:36:26.175810  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:36:26.217078  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:36:26.217112  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 20:36:26.245756  181375 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001173339s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 20:36:26.245810  181375 out.go:285] * 
	* 
	W1202 20:36:26.245878  181375 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001173339s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001173339s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 20:36:26.245907  181375 out.go:285] * 
	* 
	W1202 20:36:26.248332  181375 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:36:26.255135  181375 out.go:203] 
	W1202 20:36:26.258046  181375 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001173339s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001173339s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 20:36:26.258087  181375 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 20:36:26.258109  181375 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 20:36:26.261088  181375 out.go:203] 

                                                
                                                
** /stderr **
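For local triage of the wait-control-plane timeout above, a minimal sketch (assuming the node container name from this report and the same kubelet health endpoint kubeadm polls) of checking the kubelet directly inside the kicbase container:

	# Probe the kubelet healthz endpoint that kubeadm waits on (port 10248, as in the log above)
	docker exec kubernetes-upgrade-080046 curl -sSL http://127.0.0.1:10248/healthz

	# Inspect why the kubelet is not coming up (same commands kubeadm's error text suggests)
	docker exec kubernetes-upgrade-080046 systemctl status kubelet --no-pager
	docker exec kubernetes-upgrade-080046 journalctl -xeu kubelet --no-pager | tail -n 100

	# Check whether the node is on cgroup v1 or v2 (cgroup2fs => v2, tmpfs => v1),
	# relevant to the SystemVerification warning about cgroups v1 deprecation
	docker exec kubernetes-upgrade-080046 stat -fc %T /sys/fs/cgroup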
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-080046 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-080046 version --output=json: exit status 1 (80.707352ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
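The captured minikube output above suggests retrying with an explicit kubelet cgroup driver; a hedged sketch of that retry, reusing the flags from the failing invocation in this report and the extra-config value exactly as the suggestion spells it:

	# Re-run the failing start with the cgroup-driver override suggested in the log
	out/minikube-linux-arm64 start -p kubernetes-upgrade-080046 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=1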
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-02 20:36:26.763084516 +0000 UTC m=+6506.597385281
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-080046
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-080046:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4642f91a96ef97b00614884c9ead16dd79af85e54f040e972747b4ed0c63c354",
	        "Created": "2025-12-02T20:23:23.768943295Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 181583,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:23:54.082421271Z",
	            "FinishedAt": "2025-12-02T20:23:52.787049419Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/4642f91a96ef97b00614884c9ead16dd79af85e54f040e972747b4ed0c63c354/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4642f91a96ef97b00614884c9ead16dd79af85e54f040e972747b4ed0c63c354/hostname",
	        "HostsPath": "/var/lib/docker/containers/4642f91a96ef97b00614884c9ead16dd79af85e54f040e972747b4ed0c63c354/hosts",
	        "LogPath": "/var/lib/docker/containers/4642f91a96ef97b00614884c9ead16dd79af85e54f040e972747b4ed0c63c354/4642f91a96ef97b00614884c9ead16dd79af85e54f040e972747b4ed0c63c354-json.log",
	        "Name": "/kubernetes-upgrade-080046",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-080046:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-080046",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4642f91a96ef97b00614884c9ead16dd79af85e54f040e972747b4ed0c63c354",
	                "LowerDir": "/var/lib/docker/overlay2/48b4ee4b94bba79794cae3c8c99aa9b1882eacf6e3d80b9f10a5cbc9c23d7610-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48b4ee4b94bba79794cae3c8c99aa9b1882eacf6e3d80b9f10a5cbc9c23d7610/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48b4ee4b94bba79794cae3c8c99aa9b1882eacf6e3d80b9f10a5cbc9c23d7610/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48b4ee4b94bba79794cae3c8c99aa9b1882eacf6e3d80b9f10a5cbc9c23d7610/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-080046",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-080046/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-080046",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-080046",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-080046",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b3226c219fbb7fccc50d0b54925fda57efe5370e6150806fa99e38f088369849",
	            "SandboxKey": "/var/run/docker/netns/b3226c219fbb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32998"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32999"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33002"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33000"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33001"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-080046": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:b1:e5:36:9e:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1acb390b6be116624ca2395101c4ab31f3889aa0a1f381faabf94f81bbea410c",
	                    "EndpointID": "52e5ccee119bb44fb59523816ce360a7d4b5e898bdeeb4a50387a09bda5370e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-080046",
	                        "4642f91a96ef"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
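Note on the inspect output above: HostConfig.PortBindings asks Docker for host-assigned ports (every HostPort is empty), while NetworkSettings.Ports carries what was actually bound (127.0.0.1:32998-33002). A minimal Go sketch of reading one of those mappings back with the same inspect template minikube itself runs later in this log; it simply shells out to the docker CLI and assumes the container name shown above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the host port bound to the container's 22/tcp mapping.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"kubernetes-upgrade-080046").Output()
	if err != nil {
		panic(err)
	}
	// For the inspect output above this prints 32998.
	fmt.Println("ssh published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}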
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-080046 -n kubernetes-upgrade-080046
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-080046 -n kubernetes-upgrade-080046: exit status 2 (350.017438ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-080046 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-334566 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo systemctl status docker --all --full --no-pager                                      │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo systemctl cat docker --no-pager                                                      │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo cat /etc/docker/daemon.json                                                          │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo docker system info                                                                   │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo cri-dockerd --version                                                                │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo systemctl cat containerd --no-pager                                                  │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo cat /etc/containerd/config.toml                                                      │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo containerd config dump                                                               │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo systemctl status crio --all --full --no-pager                                        │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo systemctl cat crio --no-pager                                                        │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ ssh     │ -p cilium-334566 sudo crio config                                                                          │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │                     │
	│ delete  │ -p cilium-334566                                                                                           │ cilium-334566            │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │ 02 Dec 25 20:33 UTC │
	│ start   │ -p force-systemd-env-639740 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-639740 │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │ 02 Dec 25 20:33 UTC │
	│ delete  │ -p force-systemd-env-639740                                                                                │ force-systemd-env-639740 │ jenkins │ v1.37.0 │ 02 Dec 25 20:33 UTC │ 02 Dec 25 20:34 UTC │
	│ start   │ -p cert-expiration-182891 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-182891   │ jenkins │ v1.37.0 │ 02 Dec 25 20:34 UTC │ 02 Dec 25 20:34 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:34:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:34:02.779838  220500 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:34:02.779945  220500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:34:02.779948  220500 out.go:374] Setting ErrFile to fd 2...
	I1202 20:34:02.779952  220500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:34:02.780277  220500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:34:02.780710  220500 out.go:368] Setting JSON to false
	I1202 20:34:02.781530  220500 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8181,"bootTime":1764699462,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 20:34:02.781604  220500 start.go:143] virtualization:  
	I1202 20:34:02.785367  220500 out.go:179] * [cert-expiration-182891] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 20:34:02.789693  220500 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 20:34:02.791162  220500 notify.go:221] Checking for updates...
	I1202 20:34:02.797076  220500 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:34:02.800123  220500 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 20:34:02.802920  220500 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 20:34:02.805974  220500 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 20:34:02.808674  220500 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:34:02.811916  220500 config.go:182] Loaded profile config "kubernetes-upgrade-080046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:34:02.812008  220500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:34:02.848987  220500 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 20:34:02.849112  220500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:34:02.904894  220500 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 20:34:02.895433365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:34:02.904982  220500 docker.go:319] overlay module found
	I1202 20:34:02.908024  220500 out.go:179] * Using the docker driver based on user configuration
	I1202 20:34:02.910751  220500 start.go:309] selected driver: docker
	I1202 20:34:02.910760  220500 start.go:927] validating driver "docker" against <nil>
	I1202 20:34:02.910770  220500 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:34:02.911488  220500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:34:02.965727  220500 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 20:34:02.956508992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:34:02.965857  220500 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:34:02.966057  220500 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 20:34:02.968817  220500 out.go:179] * Using Docker driver with root privileges
	I1202 20:34:02.971499  220500 cni.go:84] Creating CNI manager for ""
	I1202 20:34:02.971553  220500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:34:02.971560  220500 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:34:02.971628  220500 start.go:353] cluster config:
	{Name:cert-expiration-182891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-182891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:34:02.976614  220500 out.go:179] * Starting "cert-expiration-182891" primary control-plane node in "cert-expiration-182891" cluster
	I1202 20:34:02.979306  220500 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:34:02.982189  220500 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:34:02.985092  220500 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:34:02.985131  220500 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 20:34:02.985138  220500 cache.go:65] Caching tarball of preloaded images
	I1202 20:34:02.985166  220500 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:34:02.985214  220500 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 20:34:02.985222  220500 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:34:02.985322  220500 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/config.json ...
	I1202 20:34:02.985338  220500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/config.json: {Name:mk4f320739ef1744d9da42a3416f1bb51d3f80c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:03.004771  220500 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:34:03.004788  220500 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:34:03.004804  220500 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:34:03.004841  220500 start.go:360] acquireMachinesLock for cert-expiration-182891: {Name:mk3b95cffb8e307dd79a357564d6206085d54c63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:34:03.005000  220500 start.go:364] duration metric: took 143.372µs to acquireMachinesLock for "cert-expiration-182891"
	I1202 20:34:03.005031  220500 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-182891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-182891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:34:03.005118  220500 start.go:125] createHost starting for "" (driver="docker")
	I1202 20:34:03.009390  220500 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 20:34:03.009638  220500 start.go:159] libmachine.API.Create for "cert-expiration-182891" (driver="docker")
	I1202 20:34:03.009706  220500 client.go:173] LocalClient.Create starting
	I1202 20:34:03.009843  220500 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem
	I1202 20:34:03.009881  220500 main.go:143] libmachine: Decoding PEM data...
	I1202 20:34:03.009895  220500 main.go:143] libmachine: Parsing certificate...
	I1202 20:34:03.009944  220500 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem
	I1202 20:34:03.009961  220500 main.go:143] libmachine: Decoding PEM data...
	I1202 20:34:03.009976  220500 main.go:143] libmachine: Parsing certificate...
	I1202 20:34:03.010385  220500 cli_runner.go:164] Run: docker network inspect cert-expiration-182891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 20:34:03.027528  220500 cli_runner.go:211] docker network inspect cert-expiration-182891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 20:34:03.027637  220500 network_create.go:284] running [docker network inspect cert-expiration-182891] to gather additional debugging logs...
	I1202 20:34:03.027653  220500 cli_runner.go:164] Run: docker network inspect cert-expiration-182891
	W1202 20:34:03.043602  220500 cli_runner.go:211] docker network inspect cert-expiration-182891 returned with exit code 1
	I1202 20:34:03.043630  220500 network_create.go:287] error running [docker network inspect cert-expiration-182891]: docker network inspect cert-expiration-182891: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-182891 not found
	I1202 20:34:03.043641  220500 network_create.go:289] output of [docker network inspect cert-expiration-182891]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-182891 not found
	
	** /stderr **
	I1202 20:34:03.043756  220500 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:34:03.061377  220500 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-56dad1208e3b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:3e:f9:4b:bf:54} reservation:<nil>}
	I1202 20:34:03.061744  220500 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-3915b3fb98c6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:46:0f:17:01:67:2a} reservation:<nil>}
	I1202 20:34:03.062143  220500 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-02f7697fee92 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ee:7b:29:0c:81:b5} reservation:<nil>}
	I1202 20:34:03.062514  220500 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1acb390b6be1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:dd:c8:8a:75:f3} reservation:<nil>}
	I1202 20:34:03.062972  220500 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d1ae0}
	I1202 20:34:03.062991  220500 network_create.go:124] attempt to create docker network cert-expiration-182891 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 20:34:03.063052  220500 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-182891 cert-expiration-182891
	I1202 20:34:03.128728  220500 network_create.go:108] docker network cert-expiration-182891 192.168.85.0/24 created
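The network.go lines above probe 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24 and 192.168.76.0/24, find them taken, and settle on 192.168.85.0/24. A minimal sketch of that probing, assuming the start address and the +9 third-octet step implied by this log rather than minikube's actual implementation:

package main

import "fmt"

func main() {
	// Subnets this log reports as already taken by other minikube networks.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
	for octet := 49; octet <= 254; octet += 9 { // assumed stepping, inferred from the log above
		if !taken[octet] {
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
			return
		}
	}
	fmt.Println("no free /24 available")
}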
	I1202 20:34:03.128750  220500 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-182891" container
	I1202 20:34:03.128831  220500 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 20:34:03.147886  220500 cli_runner.go:164] Run: docker volume create cert-expiration-182891 --label name.minikube.sigs.k8s.io=cert-expiration-182891 --label created_by.minikube.sigs.k8s.io=true
	I1202 20:34:03.168172  220500 oci.go:103] Successfully created a docker volume cert-expiration-182891
	I1202 20:34:03.168259  220500 cli_runner.go:164] Run: docker run --rm --name cert-expiration-182891-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-182891 --entrypoint /usr/bin/test -v cert-expiration-182891:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 20:34:03.703494  220500 oci.go:107] Successfully prepared a docker volume cert-expiration-182891
	I1202 20:34:03.703540  220500 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:34:03.703548  220500 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 20:34:03.703630  220500 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-182891:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 20:34:07.747357  220500 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-182891:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.043694785s)
	I1202 20:34:07.747378  220500 kic.go:203] duration metric: took 4.043827778s to extract preloaded images to volume ...
	W1202 20:34:07.747514  220500 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 20:34:07.747613  220500 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 20:34:07.799775  220500 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-182891 --name cert-expiration-182891 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-182891 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-182891 --network cert-expiration-182891 --ip 192.168.85.2 --volume cert-expiration-182891:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 20:34:08.147474  220500 cli_runner.go:164] Run: docker container inspect cert-expiration-182891 --format={{.State.Running}}
	I1202 20:34:08.177640  220500 cli_runner.go:164] Run: docker container inspect cert-expiration-182891 --format={{.State.Status}}
	I1202 20:34:08.201210  220500 cli_runner.go:164] Run: docker exec cert-expiration-182891 stat /var/lib/dpkg/alternatives/iptables
	I1202 20:34:08.251580  220500 oci.go:144] the created container "cert-expiration-182891" has a running status.
	I1202 20:34:08.251597  220500 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa...
	I1202 20:34:08.766846  220500 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 20:34:08.785120  220500 cli_runner.go:164] Run: docker container inspect cert-expiration-182891 --format={{.State.Status}}
	I1202 20:34:08.803859  220500 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 20:34:08.803870  220500 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-182891 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 20:34:08.861699  220500 cli_runner.go:164] Run: docker container inspect cert-expiration-182891 --format={{.State.Status}}
	I1202 20:34:08.879965  220500 machine.go:94] provisionDockerMachine start ...
	I1202 20:34:08.880051  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:08.897559  220500 main.go:143] libmachine: Using SSH client type: native
	I1202 20:34:08.897905  220500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1202 20:34:08.897912  220500 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:34:08.898618  220500 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57854->127.0.0.1:33038: read: connection reset by peer
	I1202 20:34:12.053548  220500 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-182891
	
	I1202 20:34:12.053563  220500 ubuntu.go:182] provisioning hostname "cert-expiration-182891"
	I1202 20:34:12.053641  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:12.073487  220500 main.go:143] libmachine: Using SSH client type: native
	I1202 20:34:12.073902  220500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1202 20:34:12.073920  220500 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-182891 && echo "cert-expiration-182891" | sudo tee /etc/hostname
	I1202 20:34:12.234793  220500 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-182891
	
	I1202 20:34:12.234860  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:12.252788  220500 main.go:143] libmachine: Using SSH client type: native
	I1202 20:34:12.253090  220500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1202 20:34:12.253104  220500 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-182891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-182891/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-182891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:34:12.397697  220500 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:34:12.397713  220500 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 20:34:12.397737  220500 ubuntu.go:190] setting up certificates
	I1202 20:34:12.397745  220500 provision.go:84] configureAuth start
	I1202 20:34:12.397803  220500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-182891
	I1202 20:34:12.414947  220500 provision.go:143] copyHostCerts
	I1202 20:34:12.415003  220500 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 20:34:12.415009  220500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 20:34:12.415085  220500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 20:34:12.415180  220500 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 20:34:12.415184  220500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 20:34:12.415214  220500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 20:34:12.415270  220500 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 20:34:12.415273  220500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 20:34:12.415296  220500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 20:34:12.415348  220500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-182891 san=[127.0.0.1 192.168.85.2 cert-expiration-182891 localhost minikube]
	I1202 20:34:12.449437  220500 provision.go:177] copyRemoteCerts
	I1202 20:34:12.449506  220500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:34:12.449553  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:12.467508  220500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa Username:docker}
	I1202 20:34:12.570364  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:34:12.589480  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 20:34:12.610183  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:34:12.626375  220500 provision.go:87] duration metric: took 228.59726ms to configureAuth
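The provision.go:117 line above generates a server certificate for jenkins.cert-expiration-182891 with SANs 127.0.0.1, 192.168.85.2, cert-expiration-182891, localhost and minikube. A self-contained Go sketch that produces a certificate with those SANs; unlike minikube it self-signs and picks an arbitrary one-year validity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-182891"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0), // validity chosen arbitrarily for this sketch
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go:117 line above.
		DNSNames:    []string{"cert-expiration-182891", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed for brevity; minikube signs server.pem with its ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}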
	I1202 20:34:12.626392  220500 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:34:12.626564  220500 config.go:182] Loaded profile config "cert-expiration-182891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:34:12.626659  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:12.643145  220500 main.go:143] libmachine: Using SSH client type: native
	I1202 20:34:12.643450  220500 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1202 20:34:12.643461  220500 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:34:12.953111  220500 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:34:12.953123  220500 machine.go:97] duration metric: took 4.073146547s to provisionDockerMachine
	I1202 20:34:12.953133  220500 client.go:176] duration metric: took 9.943421902s to LocalClient.Create
	I1202 20:34:12.953144  220500 start.go:167] duration metric: took 9.943508209s to libmachine.API.Create "cert-expiration-182891"
	I1202 20:34:12.953150  220500 start.go:293] postStartSetup for "cert-expiration-182891" (driver="docker")
	I1202 20:34:12.953159  220500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:34:12.953231  220500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:34:12.953273  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:12.971088  220500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa Username:docker}
	I1202 20:34:13.077578  220500 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:34:13.080710  220500 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:34:13.080731  220500 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:34:13.080740  220500 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 20:34:13.080793  220500 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 20:34:13.080869  220500 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 20:34:13.080968  220500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:34:13.088088  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 20:34:13.104866  220500 start.go:296] duration metric: took 151.703544ms for postStartSetup
	I1202 20:34:13.105221  220500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-182891
	I1202 20:34:13.121515  220500 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/config.json ...
	I1202 20:34:13.121818  220500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:34:13.121859  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:13.137980  220500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa Username:docker}
	I1202 20:34:13.238271  220500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:34:13.242645  220500 start.go:128] duration metric: took 10.23751631s to createHost
	I1202 20:34:13.242659  220500 start.go:83] releasing machines lock for "cert-expiration-182891", held for 10.237652157s
	I1202 20:34:13.242734  220500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-182891
	I1202 20:34:13.258990  220500 ssh_runner.go:195] Run: cat /version.json
	I1202 20:34:13.259030  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:13.259066  220500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:34:13.259125  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:13.277344  220500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa Username:docker}
	I1202 20:34:13.280175  220500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa Username:docker}
	I1202 20:34:13.389627  220500 ssh_runner.go:195] Run: systemctl --version
	I1202 20:34:13.479094  220500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:34:13.516699  220500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:34:13.520929  220500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:34:13.520988  220500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:34:13.549026  220500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 20:34:13.549038  220500 start.go:496] detecting cgroup driver to use...
	I1202 20:34:13.549069  220500 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 20:34:13.549115  220500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:34:13.566720  220500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:34:13.579473  220500 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:34:13.579523  220500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:34:13.597295  220500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:34:13.615651  220500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:34:13.735776  220500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:34:13.847238  220500 docker.go:234] disabling docker service ...
	I1202 20:34:13.847299  220500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:34:13.868252  220500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:34:13.881442  220500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:34:13.992861  220500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:34:14.141483  220500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:34:14.155385  220500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:34:14.169339  220500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:34:14.169402  220500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:34:14.179230  220500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:34:14.179285  220500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:34:14.188210  220500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:34:14.196751  220500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:34:14.205264  220500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:34:14.213141  220500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:34:14.221146  220500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:34:14.233873  220500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:34:14.242440  220500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:34:14.249828  220500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:34:14.257147  220500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:34:14.368228  220500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:34:14.532846  220500 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:34:14.532906  220500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:34:14.536721  220500 start.go:564] Will wait 60s for crictl version
	I1202 20:34:14.536774  220500 ssh_runner.go:195] Run: which crictl
	I1202 20:34:14.540318  220500 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:34:14.567301  220500 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:34:14.567424  220500 ssh_runner.go:195] Run: crio --version
	I1202 20:34:14.595679  220500 ssh_runner.go:195] Run: crio --version
	I1202 20:34:14.624854  220500 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:34:14.627633  220500 cli_runner.go:164] Run: docker network inspect cert-expiration-182891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:34:14.643699  220500 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:34:14.647521  220500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:34:14.656720  220500 kubeadm.go:884] updating cluster {Name:cert-expiration-182891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-182891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
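CertExpiration:3m0s in the profile above is this test's deliberately short certificate lifetime (the cert-expiration test passes a small --cert-expiration to minikube start). A sketch for checking the resulting apiserver certificate end date on the node, assuming the cert path used later in this log (not part of the captured output):

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt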
	I1202 20:34:14.656818  220500 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:34:14.656869  220500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:34:14.691183  220500 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:34:14.691194  220500 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:34:14.691245  220500 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:34:14.716300  220500 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:34:14.716312  220500 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:34:14.716318  220500 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1202 20:34:14.716396  220500 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-182891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-182891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
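The kubelet drop-in and unit rendered above are copied to the node a few lines below; once there they can be inspected with systemd's own tooling (sketch, not part of the captured output; paths match the scp targets in this log):

    # merged unit plus all drop-ins
    systemctl cat kubelet
    # the drop-in written by minikube
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf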
	I1202 20:34:14.716468  220500 ssh_runner.go:195] Run: crio config
	I1202 20:34:14.782796  220500 cni.go:84] Creating CNI manager for ""
	I1202 20:34:14.782806  220500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:34:14.782830  220500 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:34:14.782851  220500 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-182891 NodeName:cert-expiration-182891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:34:14.782966  220500 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-182891"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:34:14.783034  220500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:34:14.790880  220500 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:34:14.790956  220500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:34:14.799665  220500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1202 20:34:14.812871  220500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:34:14.825736  220500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
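The kubeadm config shown above now sits at /var/tmp/minikube/kubeadm.yaml.new on the node; it can be exercised without mutating cluster state via kubeadm's dry-run mode (sketch, not part of the captured output; binary and config paths taken from this log):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run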
	I1202 20:34:14.838537  220500 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:34:14.842256  220500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:34:14.851135  220500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:34:14.956495  220500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:34:14.971391  220500 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891 for IP: 192.168.85.2
	I1202 20:34:14.971401  220500 certs.go:195] generating shared ca certs ...
	I1202 20:34:14.971414  220500 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:14.971819  220500 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 20:34:14.971887  220500 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 20:34:14.971894  220500 certs.go:257] generating profile certs ...
	I1202 20:34:14.971965  220500 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/client.key
	I1202 20:34:14.971976  220500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/client.crt with IP's: []
	I1202 20:34:15.366804  220500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/client.crt ...
	I1202 20:34:15.366821  220500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/client.crt: {Name:mk710e184bb03e58c5617d8df70a2e115d2aafe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:15.367041  220500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/client.key ...
	I1202 20:34:15.367051  220500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/client.key: {Name:mk97257f192df6b2102a73f10c8607aa3f2614ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:15.367152  220500 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.key.151f6042
	I1202 20:34:15.367165  220500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.crt.151f6042 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 20:34:15.597905  220500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.crt.151f6042 ...
	I1202 20:34:15.597922  220500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.crt.151f6042: {Name:mk3aa245eb469834212de5673e956eb2ec6b3181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:15.598088  220500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.key.151f6042 ...
	I1202 20:34:15.598095  220500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.key.151f6042: {Name:mkb54a8b0acb48ae1b7fcea6ee9340770c30b3f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:15.598163  220500 certs.go:382] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.crt.151f6042 -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.crt
	I1202 20:34:15.598234  220500 certs.go:386] copying /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.key.151f6042 -> /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.key
	I1202 20:34:15.598300  220500 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/proxy-client.key
	I1202 20:34:15.598311  220500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/proxy-client.crt with IP's: []
	I1202 20:34:15.884943  220500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/proxy-client.crt ...
	I1202 20:34:15.884957  220500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/proxy-client.crt: {Name:mk7a90a59fe4f05340f5e2e113f159060d8b32d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:15.885140  220500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/proxy-client.key ...
	I1202 20:34:15.885148  220500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/proxy-client.key: {Name:mk8f4b7e144c4869d72384f444f923706690c975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:15.885344  220500 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 20:34:15.885382  220500 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 20:34:15.885388  220500 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:34:15.885415  220500 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:34:15.885447  220500 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:34:15.885473  220500 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 20:34:15.885526  220500 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 20:34:15.886146  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:34:15.904859  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:34:15.923381  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:34:15.942621  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 20:34:15.961089  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 20:34:15.978991  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:34:15.996476  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:34:16.017696  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/cert-expiration-182891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:34:16.035999  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 20:34:16.053852  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:34:16.071770  220500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 20:34:16.090104  220500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:34:16.103165  220500 ssh_runner.go:195] Run: openssl version
	I1202 20:34:16.109235  220500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:34:16.117306  220500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:34:16.120851  220500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:34:16.120907  220500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:34:16.162198  220500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:34:16.170472  220500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 20:34:16.178540  220500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 20:34:16.182074  220500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 20:34:16.182138  220500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 20:34:16.222993  220500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 20:34:16.231719  220500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 20:34:16.239879  220500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 20:34:16.243828  220500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 20:34:16.243881  220500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 20:34:16.287785  220500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
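The symlink names above come from OpenSSL's subject-hash scheme: openssl x509 -hash prints the hash that certificate lookups expect as /etc/ssl/certs/<hash>.0. A sketch reproducing the minikubeCA link by hand (not part of the captured output):

    # compute the subject hash, then create the <hash>.0 symlink the log sets up
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"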
	I1202 20:34:16.297290  220500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:34:16.302137  220500 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:34:16.302188  220500 kubeadm.go:401] StartCluster: {Name:cert-expiration-182891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-182891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:34:16.302258  220500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:34:16.302319  220500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:34:16.334635  220500 cri.go:89] found id: ""
	I1202 20:34:16.334693  220500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:34:16.347411  220500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:34:16.356449  220500 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:34:16.356500  220500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:34:16.364668  220500 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:34:16.364677  220500 kubeadm.go:158] found existing configuration files:
	
	I1202 20:34:16.364724  220500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:34:16.372537  220500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:34:16.372605  220500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:34:16.379991  220500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:34:16.387582  220500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:34:16.387637  220500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:34:16.395040  220500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:34:16.402766  220500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:34:16.402820  220500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:34:16.410489  220500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:34:16.418403  220500 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:34:16.418456  220500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:34:16.425881  220500 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:34:16.505336  220500 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1202 20:34:16.505562  220500 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 20:34:16.576682  220500 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:34:32.062382  220500 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 20:34:32.062431  220500 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:34:32.062517  220500 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:34:32.062577  220500 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 20:34:32.062610  220500 kubeadm.go:319] OS: Linux
	I1202 20:34:32.062654  220500 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:34:32.062701  220500 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 20:34:32.062747  220500 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:34:32.062794  220500 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:34:32.062841  220500 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:34:32.062888  220500 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:34:32.062932  220500 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:34:32.062978  220500 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:34:32.063023  220500 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 20:34:32.063106  220500 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:34:32.063200  220500 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:34:32.063289  220500 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:34:32.063351  220500 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 20:34:32.066329  220500 out.go:252]   - Generating certificates and keys ...
	I1202 20:34:32.066409  220500 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:34:32.066473  220500 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:34:32.066547  220500 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 20:34:32.066606  220500 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 20:34:32.066665  220500 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 20:34:32.066714  220500 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 20:34:32.066767  220500 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 20:34:32.066891  220500 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-182891 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 20:34:32.066942  220500 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 20:34:32.067069  220500 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-182891 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 20:34:32.067133  220500 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 20:34:32.067195  220500 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 20:34:32.067238  220500 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 20:34:32.067292  220500 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 20:34:32.067341  220500 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:34:32.067398  220500 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:34:32.067452  220500 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:34:32.067514  220500 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:34:32.067567  220500 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:34:32.067647  220500 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:34:32.067711  220500 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 20:34:32.070892  220500 out.go:252]   - Booting up control plane ...
	I1202 20:34:32.070998  220500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:34:32.071103  220500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:34:32.071181  220500 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:34:32.071289  220500 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:34:32.071381  220500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:34:32.071486  220500 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:34:32.071612  220500 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:34:32.071664  220500 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:34:32.071836  220500 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:34:32.071963  220500 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:34:32.072028  220500 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501557903s
	I1202 20:34:32.072126  220500 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 20:34:32.072212  220500 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1202 20:34:32.072308  220500 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 20:34:32.072391  220500 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 20:34:32.072471  220500 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.000780126s
	I1202 20:34:32.072542  220500 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.585334357s
	I1202 20:34:32.072621  220500 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501534367s
	I1202 20:34:32.072734  220500 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:34:32.072868  220500 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:34:32.072940  220500 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:34:32.073143  220500 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-182891 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:34:32.073202  220500 kubeadm.go:319] [bootstrap-token] Using token: 29syhk.tg17gxdtsob85fdc
	I1202 20:34:32.076241  220500 out.go:252]   - Configuring RBAC rules ...
	I1202 20:34:32.076383  220500 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:34:32.076481  220500 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:34:32.076643  220500 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:34:32.076788  220500 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:34:32.076912  220500 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:34:32.077002  220500 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:34:32.077131  220500 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:34:32.077185  220500 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:34:32.077240  220500 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:34:32.077244  220500 kubeadm.go:319] 
	I1202 20:34:32.077304  220500 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:34:32.077307  220500 kubeadm.go:319] 
	I1202 20:34:32.077395  220500 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:34:32.077399  220500 kubeadm.go:319] 
	I1202 20:34:32.077436  220500 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:34:32.077502  220500 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:34:32.077552  220500 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:34:32.077555  220500 kubeadm.go:319] 
	I1202 20:34:32.077608  220500 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:34:32.077610  220500 kubeadm.go:319] 
	I1202 20:34:32.077852  220500 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:34:32.077856  220500 kubeadm.go:319] 
	I1202 20:34:32.077915  220500 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:34:32.077988  220500 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:34:32.078060  220500 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:34:32.078063  220500 kubeadm.go:319] 
	I1202 20:34:32.078146  220500 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:34:32.078220  220500 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:34:32.078223  220500 kubeadm.go:319] 
	I1202 20:34:32.078312  220500 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 29syhk.tg17gxdtsob85fdc \
	I1202 20:34:32.078427  220500 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04aaaaae77b68f960c0a9ced9ec2515a576e5d33be14c52dd78ac859fdceb88b \
	I1202 20:34:32.078447  220500 kubeadm.go:319] 	--control-plane 
	I1202 20:34:32.078449  220500 kubeadm.go:319] 
	I1202 20:34:32.078532  220500 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:34:32.078535  220500 kubeadm.go:319] 
	I1202 20:34:32.078615  220500 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 29syhk.tg17gxdtsob85fdc \
	I1202 20:34:32.078729  220500 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:04aaaaae77b68f960c0a9ced9ec2515a576e5d33be14c52dd78ac859fdceb88b 
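The --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA to vet a join command before use; this is the standard kubeadm recipe, pointed at minikube's certificate directory (sketch, not part of the captured output):

    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'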
	I1202 20:34:32.078737  220500 cni.go:84] Creating CNI manager for ""
	I1202 20:34:32.078743  220500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:34:32.083663  220500 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1202 20:34:32.086744  220500 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 20:34:32.091043  220500 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1202 20:34:32.091053  220500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 20:34:32.117388  220500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 20:34:32.412294  220500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:34:32.412421  220500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:34:32.412491  220500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-182891 minikube.k8s.io/updated_at=2025_12_02T20_34_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689 minikube.k8s.io/name=cert-expiration-182891 minikube.k8s.io/primary=true
	I1202 20:34:32.628832  220500 ops.go:34] apiserver oom_adj: -16
	I1202 20:34:32.628848  220500 kubeadm.go:1114] duration metric: took 216.478134ms to wait for elevateKubeSystemPrivileges
	I1202 20:34:32.628873  220500 kubeadm.go:403] duration metric: took 16.326688697s to StartCluster
	I1202 20:34:32.628888  220500 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:32.628965  220500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 20:34:32.629907  220500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:34:32.630125  220500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:34:32.630143  220500 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:34:32.630362  220500 config.go:182] Loaded profile config "cert-expiration-182891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:34:32.630396  220500 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:34:32.630458  220500 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-182891"
	I1202 20:34:32.630471  220500 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-182891"
	I1202 20:34:32.630492  220500 host.go:66] Checking if "cert-expiration-182891" exists ...
	I1202 20:34:32.630942  220500 cli_runner.go:164] Run: docker container inspect cert-expiration-182891 --format={{.State.Status}}
	I1202 20:34:32.631401  220500 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-182891"
	I1202 20:34:32.631417  220500 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-182891"
	I1202 20:34:32.631690  220500 cli_runner.go:164] Run: docker container inspect cert-expiration-182891 --format={{.State.Status}}
	I1202 20:34:32.635359  220500 out.go:179] * Verifying Kubernetes components...
	I1202 20:34:32.641922  220500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:34:32.670211  220500 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-182891"
	I1202 20:34:32.670238  220500 host.go:66] Checking if "cert-expiration-182891" exists ...
	I1202 20:34:32.670676  220500 cli_runner.go:164] Run: docker container inspect cert-expiration-182891 --format={{.State.Status}}
	I1202 20:34:32.689086  220500 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:34:32.694407  220500 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:34:32.694419  220500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:34:32.694485  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:32.697793  220500 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:34:32.697805  220500 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:34:32.697884  220500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-182891
	I1202 20:34:32.731780  220500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa Username:docker}
	I1202 20:34:32.739421  220500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/cert-expiration-182891/id_rsa Username:docker}
	I1202 20:34:32.927891  220500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
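The rewrite above splices a hosts block for host.minikube.internal into the CoreDNS Corefile; it can be confirmed after the replace by reading the ConfigMap back (sketch, not part of the captured output; kubectl invocation reused from the log):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'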
	I1202 20:34:32.961532  220500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:34:32.981291  220500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:34:33.021612  220500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:34:33.305430  220500 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1202 20:34:33.307316  220500 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:34:33.307360  220500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:34:33.624662  220500 api_server.go:72] duration metric: took 994.493029ms to wait for apiserver process to appear ...
	I1202 20:34:33.624695  220500 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:34:33.624715  220500 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 20:34:33.628019  220500 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1202 20:34:33.631141  220500 addons.go:530] duration metric: took 1.000735035s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1202 20:34:33.639849  220500 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1202 20:34:33.640821  220500 api_server.go:141] control plane version: v1.34.2
	I1202 20:34:33.640834  220500 api_server.go:131] duration metric: took 16.133277ms to wait for apiserver health ...
	I1202 20:34:33.640841  220500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:34:33.643868  220500 system_pods.go:59] 5 kube-system pods found
	I1202 20:34:33.643887  220500 system_pods.go:61] "etcd-cert-expiration-182891" [d7146353-efc6-4557-bb0e-eb498668a61f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:34:33.643896  220500 system_pods.go:61] "kube-apiserver-cert-expiration-182891" [6777941a-35e9-4ed3-a52a-097202813d38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:34:33.643902  220500 system_pods.go:61] "kube-controller-manager-cert-expiration-182891" [fda263e1-06a2-471e-89b4-e1ff33d4b5fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:34:33.643907  220500 system_pods.go:61] "kube-scheduler-cert-expiration-182891" [26a1df9c-85c0-4d0f-bed8-05f055fe2f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:34:33.643911  220500 system_pods.go:61] "storage-provisioner" [31ad6494-d1c3-4944-8ac8-c195287e3b8f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1202 20:34:33.643918  220500 system_pods.go:74] duration metric: took 3.070895ms to wait for pod list to return data ...
	I1202 20:34:33.643926  220500 kubeadm.go:587] duration metric: took 1.013763916s to wait for: map[apiserver:true system_pods:true]
	I1202 20:34:33.643937  220500 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:34:33.646581  220500 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 20:34:33.646600  220500 node_conditions.go:123] node cpu capacity is 2
	I1202 20:34:33.646609  220500 node_conditions.go:105] duration metric: took 2.669446ms to run NodePressure ...
	I1202 20:34:33.646619  220500 start.go:242] waiting for startup goroutines ...
	I1202 20:34:33.809205  220500 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-182891" context rescaled to 1 replicas
	I1202 20:34:33.809227  220500 start.go:247] waiting for cluster config update ...
	I1202 20:34:33.809238  220500 start.go:256] writing updated cluster config ...
	I1202 20:34:33.809544  220500 ssh_runner.go:195] Run: rm -f paused
	I1202 20:34:33.870196  220500 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 20:34:33.873311  220500 out.go:179] * Done! kubectl is now configured to use "cert-expiration-182891" cluster and "default" namespace by default
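The closing line notes a client/cluster skew of one minor version (kubectl 1.33.2 against a 1.34.2 control plane), which is within kubectl's supported +/-1 range; using the kubectl bundled with the profile avoids the mismatch entirely (sketch, not part of the captured output):

    minikube -p cert-expiration-182891 kubectl -- version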
	I1202 20:36:25.795730  181375 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001173339s
	I1202 20:36:25.795770  181375 kubeadm.go:319] 
	I1202 20:36:25.795829  181375 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 20:36:25.795864  181375 kubeadm.go:319] 	- The kubelet is not running
	I1202 20:36:25.795968  181375 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 20:36:25.795974  181375 kubeadm.go:319] 
	I1202 20:36:25.796076  181375 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 20:36:25.796107  181375 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 20:36:25.796140  181375 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 20:36:25.796145  181375 kubeadm.go:319] 
	I1202 20:36:25.799726  181375 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 20:36:25.800158  181375 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 20:36:25.800274  181375 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:36:25.800520  181375 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 20:36:25.800530  181375 kubeadm.go:319] 
	I1202 20:36:25.800600  181375 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
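When kubeadm gives up waiting on the kubelet as above, the commands it suggests are the right first stops, together with the container listing minikube itself falls back to below (sketch, not part of the captured output):

    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100
    sudo crictl ps -a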
	I1202 20:36:25.800655  181375 kubeadm.go:403] duration metric: took 12m7.006146482s to StartCluster
	I1202 20:36:25.800690  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:36:25.800752  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:36:25.827429  181375 cri.go:89] found id: ""
	I1202 20:36:25.827451  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.827459  181375 logs.go:284] No container was found matching "kube-apiserver"
	I1202 20:36:25.827465  181375 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:36:25.827527  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:36:25.854803  181375 cri.go:89] found id: ""
	I1202 20:36:25.854827  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.854835  181375 logs.go:284] No container was found matching "etcd"
	I1202 20:36:25.854842  181375 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:36:25.854910  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:36:25.879772  181375 cri.go:89] found id: ""
	I1202 20:36:25.879797  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.879806  181375 logs.go:284] No container was found matching "coredns"
	I1202 20:36:25.879813  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:36:25.879867  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:36:25.904946  181375 cri.go:89] found id: ""
	I1202 20:36:25.904967  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.904975  181375 logs.go:284] No container was found matching "kube-scheduler"
	I1202 20:36:25.904982  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:36:25.905047  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:36:25.930544  181375 cri.go:89] found id: ""
	I1202 20:36:25.930567  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.930576  181375 logs.go:284] No container was found matching "kube-proxy"
	I1202 20:36:25.930582  181375 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:36:25.930636  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:36:25.959586  181375 cri.go:89] found id: ""
	I1202 20:36:25.959608  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.959617  181375 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 20:36:25.959623  181375 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:36:25.959679  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:36:25.984698  181375 cri.go:89] found id: ""
	I1202 20:36:25.984721  181375 logs.go:282] 0 containers: []
	W1202 20:36:25.984729  181375 logs.go:284] No container was found matching "kindnet"
	I1202 20:36:25.984735  181375 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:36:25.984789  181375 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:36:26.013174  181375 cri.go:89] found id: ""
	I1202 20:36:26.013199  181375 logs.go:282] 0 containers: []
	W1202 20:36:26.013208  181375 logs.go:284] No container was found matching "storage-provisioner"
	I1202 20:36:26.013218  181375 logs.go:123] Gathering logs for kubelet ...
	I1202 20:36:26.013229  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:36:26.091680  181375 logs.go:123] Gathering logs for dmesg ...
	I1202 20:36:26.091722  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:36:26.107966  181375 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:36:26.107996  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:36:26.175717  181375 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:36:26.175770  181375 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:36:26.175810  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:36:26.217078  181375 logs.go:123] Gathering logs for container status ...
	I1202 20:36:26.217112  181375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 20:36:26.245756  181375 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001173339s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 20:36:26.245810  181375 out.go:285] * 
	W1202 20:36:26.245878  181375 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001173339s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 20:36:26.245907  181375 out.go:285] * 
	W1202 20:36:26.248332  181375 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:36:26.255135  181375 out.go:203] 
	W1202 20:36:26.258046  181375 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001173339s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 20:36:26.258087  181375 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 20:36:26.258109  181375 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 20:36:26.261088  181375 out.go:203] 
	
	
	==> CRI-O <==
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.206998804Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-apiserver:v1.35.0-beta.0 found" id=35530152-33eb-46a7-83c9-4d796d98fef3 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.209816036Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8d3968f9-9212-4953-8a5c-0fc119a04578 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.209975284Z" level=info msg="Image registry.k8s.io/coredns/coredns:v1.13.1 not found" id=8d3968f9-9212-4953-8a5c-0fc119a04578 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.210111904Z" level=info msg="Neither image nor artfiact registry.k8s.io/coredns/coredns:v1.13.1 found" id=8d3968f9-9212-4953-8a5c-0fc119a04578 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.249178291Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=0ff55a56-9e90-47c3-9eb2-1dfc0699aa58 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.249321803Z" level=info msg="Image registry.k8s.io/kube-scheduler:v1.35.0-beta.0 not found" id=0ff55a56-9e90-47c3-9eb2-1dfc0699aa58 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.249360136Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=0ff55a56-9e90-47c3-9eb2-1dfc0699aa58 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.266290656Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=293cafd4-c935-4af5-a7a9-b2a742603ddf name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.266421409Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-beta.0 not found" id=293cafd4-c935-4af5-a7a9-b2a742603ddf name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:02 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:02.266457856Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-apiserver:v1.35.0-beta.0 found" id=293cafd4-c935-4af5-a7a9-b2a742603ddf name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:24:03 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:24:03.26993082Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4eb82f52-daaa-449c-9716-63533ce43965 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:28:22 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:28:22.04785741Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5012ecb6-d6a5-47d0-b06f-d94f919af00a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:28:22 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:28:22.052733874Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=8dd90d60-ead4-461c-8840-5f8c991f7242 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:28:22 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:28:22.054594473Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5d455bd4-205b-4fc9-876c-ec51f806db90 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:28:22 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:28:22.059510058Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6c3beeb0-6830-4354-b613-4ad1a42d948c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:28:22 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:28:22.060655962Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8bee595e-7aa3-498f-afe9-583b1a181085 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:28:22 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:28:22.062126312Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=289dcd98-17cc-4f25-8716-b57cdf51e0c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:28:22 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:28:22.064339073Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c2665817-248b-40f7-8bc2-810ee130fa78 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:32:24 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:32:24.255583999Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=4e015d3d-5cd3-42f4-9008-c6e73d43a07e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:32:24 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:32:24.25726597Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=084a9d63-2a6d-4ad5-8a52-1c7e81b97fe8 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:32:24 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:32:24.259266849Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=0cb06ca5-9473-4f92-8bf5-db8964a08829 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:32:24 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:32:24.261041043Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f7d6bdb6-1813-4f42-b66b-62cfe47ab98e name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:32:24 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:32:24.262208402Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=97f752b6-df19-4493-88a5-e5a102c4191c name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:32:24 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:32:24.263790971Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=6b620929-2532-4bae-9bbc-bc9e796d6e17 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 20:32:24 kubernetes-upgrade-080046 crio[614]: time="2025-12-02T20:32:24.264760503Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=7782b435-6178-4de1-8cbf-7c925187d64f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.715582] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:58] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:02] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:04] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:09] overlayfs: idmapped layers are currently not supported
	[ +31.785180] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:10] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:12] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:13] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:14] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:36:27 up  2:18,  0 user,  load average: 0.58, 1.51, 1.69
	Linux kubernetes-upgrade-080046 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 20:36:25 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:36:26 kubernetes-upgrade-080046 kubelet[12853]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 20:36:26 kubernetes-upgrade-080046 kubelet[12853]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 20:36:26 kubernetes-upgrade-080046 kubelet[12853]: E1202 20:36:26.119055   12853 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:36:26 kubernetes-upgrade-080046 kubelet[12881]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 20:36:26 kubernetes-upgrade-080046 kubelet[12881]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 20:36:26 kubernetes-upgrade-080046 kubelet[12881]: E1202 20:36:26.856126   12881 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 20:36:26 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 20:36:27 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 02 20:36:27 kubernetes-upgrade-080046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:36:27 kubernetes-upgrade-080046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 20:36:27 kubernetes-upgrade-080046 kubelet[12943]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 20:36:27 kubernetes-upgrade-080046 kubelet[12943]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 02 20:36:27 kubernetes-upgrade-080046 kubelet[12943]: E1202 20:36:27.606763   12943 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 20:36:27 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 20:36:27 kubernetes-upgrade-080046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-080046 -n kubernetes-upgrade-080046
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-080046 -n kubernetes-upgrade-080046: exit status 2 (332.13833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-080046" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-080046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-080046
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-080046: (2.21417054s)
--- FAIL: TestKubernetesUpgrade (794.48s)
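The kubelet journal above shows the root cause of this failure: on this cgroup v1 host, kubelet v1.35.0-beta.0 refuses to start ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out after 4m0s and the upgrade test fails. Below is a minimal triage sketch along the lines the kubeadm output itself suggests; the commands are standard systemd/util-linux tools rather than part of the test harness, and the KubeletConfiguration field name is taken from the warning text, so treat the last step as an assumption to verify against the v1.35 kubelet configuration reference.
	# confirm the restart loop and surface the validation error quoted in the journal above
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	# check which cgroup hierarchy the host mounts ("cgroup2fs" means v2, "tmpfs" means v1)
	stat -fc %T /sys/fs/cgroup/
	# hypothetical opt-out the cgroup-v1 warning describes: allow kubelet to start on a v1 host
	# (append only if the key is not already present in the generated config)
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml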

                                                
                                    
TestPause/serial/Pause (6.23s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-774682 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-774682 --alsologtostderr -v=5: exit status 80 (1.768758219s)

                                                
                                                
-- stdout --
	* Pausing node pause-774682 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:32:32.911589  212580 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:32:32.912671  212580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:32:32.912855  212580 out.go:374] Setting ErrFile to fd 2...
	I1202 20:32:32.912876  212580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:32:32.913174  212580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:32:32.913475  212580 out.go:368] Setting JSON to false
	I1202 20:32:32.913523  212580 mustload.go:66] Loading cluster: pause-774682
	I1202 20:32:32.914010  212580 config.go:182] Loaded profile config "pause-774682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:32:32.914526  212580 cli_runner.go:164] Run: docker container inspect pause-774682 --format={{.State.Status}}
	I1202 20:32:32.931754  212580 host.go:66] Checking if "pause-774682" exists ...
	I1202 20:32:32.932192  212580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:32:33.000154  212580 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 20:32:32.990947537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:32:33.000784  212580 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-774682 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1202 20:32:33.006137  212580 out.go:179] * Pausing node pause-774682 ... 
	I1202 20:32:33.010094  212580 host.go:66] Checking if "pause-774682" exists ...
	I1202 20:32:33.010459  212580 ssh_runner.go:195] Run: systemctl --version
	I1202 20:32:33.010511  212580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:33.029768  212580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:33.132318  212580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:32:33.145135  212580 pause.go:52] kubelet running: true
	I1202 20:32:33.145208  212580 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:32:33.372748  212580 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:32:33.372837  212580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:32:33.445307  212580 cri.go:89] found id: "18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6"
	I1202 20:32:33.445331  212580 cri.go:89] found id: "0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1"
	I1202 20:32:33.445348  212580 cri.go:89] found id: "720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b"
	I1202 20:32:33.445352  212580 cri.go:89] found id: "40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45"
	I1202 20:32:33.445356  212580 cri.go:89] found id: "8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d"
	I1202 20:32:33.445359  212580 cri.go:89] found id: "ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb"
	I1202 20:32:33.445363  212580 cri.go:89] found id: "1a512904d19713ec18413a4d149443e9c01ab0567733a1477ec83a78dbbb44d1"
	I1202 20:32:33.445366  212580 cri.go:89] found id: "749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036"
	I1202 20:32:33.445369  212580 cri.go:89] found id: "155b4f24673bf135703ca3c7d2801c37e6f9067375a70dabce8152b9423f0975"
	I1202 20:32:33.445376  212580 cri.go:89] found id: "2951d99fb4dcbff701c97808399b1ff8aa99a8e8c9f4ff303e5dcd3a11e69d29"
	I1202 20:32:33.445383  212580 cri.go:89] found id: "f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3"
	I1202 20:32:33.445386  212580 cri.go:89] found id: "88c92b704d85b8c7806dc927206e2665a9600188d1426de4a7fb8070836531e7"
	I1202 20:32:33.445390  212580 cri.go:89] found id: "d72bb0bdd409ca1b9eee6c2382b896504c9bf98f1c09d2a7c0bb239af2a5c6bc"
	I1202 20:32:33.445395  212580 cri.go:89] found id: "12b210cc547fecac184a95362f3fc71a3557f11c3f96e014c6090ff680e2c37c"
	I1202 20:32:33.445401  212580 cri.go:89] found id: ""
	I1202 20:32:33.445460  212580 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:32:33.456363  212580 retry.go:31] will retry after 212.543184ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:32:33Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:32:33.669823  212580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:32:33.683047  212580 pause.go:52] kubelet running: false
	I1202 20:32:33.683122  212580 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:32:33.820122  212580 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:32:33.820194  212580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:32:33.886457  212580 cri.go:89] found id: "18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6"
	I1202 20:32:33.886482  212580 cri.go:89] found id: "0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1"
	I1202 20:32:33.886488  212580 cri.go:89] found id: "720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b"
	I1202 20:32:33.886491  212580 cri.go:89] found id: "40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45"
	I1202 20:32:33.886494  212580 cri.go:89] found id: "8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d"
	I1202 20:32:33.886497  212580 cri.go:89] found id: "ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb"
	I1202 20:32:33.886500  212580 cri.go:89] found id: "1a512904d19713ec18413a4d149443e9c01ab0567733a1477ec83a78dbbb44d1"
	I1202 20:32:33.886515  212580 cri.go:89] found id: "749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036"
	I1202 20:32:33.886528  212580 cri.go:89] found id: "155b4f24673bf135703ca3c7d2801c37e6f9067375a70dabce8152b9423f0975"
	I1202 20:32:33.886534  212580 cri.go:89] found id: "2951d99fb4dcbff701c97808399b1ff8aa99a8e8c9f4ff303e5dcd3a11e69d29"
	I1202 20:32:33.886537  212580 cri.go:89] found id: "f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3"
	I1202 20:32:33.886540  212580 cri.go:89] found id: "88c92b704d85b8c7806dc927206e2665a9600188d1426de4a7fb8070836531e7"
	I1202 20:32:33.886543  212580 cri.go:89] found id: "d72bb0bdd409ca1b9eee6c2382b896504c9bf98f1c09d2a7c0bb239af2a5c6bc"
	I1202 20:32:33.886546  212580 cri.go:89] found id: "12b210cc547fecac184a95362f3fc71a3557f11c3f96e014c6090ff680e2c37c"
	I1202 20:32:33.886549  212580 cri.go:89] found id: ""
	I1202 20:32:33.886597  212580 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:32:33.897368  212580 retry.go:31] will retry after 492.967737ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:32:33Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:32:34.390870  212580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:32:34.403424  212580 pause.go:52] kubelet running: false
	I1202 20:32:34.403487  212580 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1202 20:32:34.541259  212580 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1202 20:32:34.541340  212580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1202 20:32:34.606978  212580 cri.go:89] found id: "18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6"
	I1202 20:32:34.607001  212580 cri.go:89] found id: "0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1"
	I1202 20:32:34.607006  212580 cri.go:89] found id: "720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b"
	I1202 20:32:34.607010  212580 cri.go:89] found id: "40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45"
	I1202 20:32:34.607013  212580 cri.go:89] found id: "8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d"
	I1202 20:32:34.607017  212580 cri.go:89] found id: "ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb"
	I1202 20:32:34.607020  212580 cri.go:89] found id: "1a512904d19713ec18413a4d149443e9c01ab0567733a1477ec83a78dbbb44d1"
	I1202 20:32:34.607023  212580 cri.go:89] found id: "749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036"
	I1202 20:32:34.607026  212580 cri.go:89] found id: "155b4f24673bf135703ca3c7d2801c37e6f9067375a70dabce8152b9423f0975"
	I1202 20:32:34.607035  212580 cri.go:89] found id: "2951d99fb4dcbff701c97808399b1ff8aa99a8e8c9f4ff303e5dcd3a11e69d29"
	I1202 20:32:34.607039  212580 cri.go:89] found id: "f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3"
	I1202 20:32:34.607042  212580 cri.go:89] found id: "88c92b704d85b8c7806dc927206e2665a9600188d1426de4a7fb8070836531e7"
	I1202 20:32:34.607045  212580 cri.go:89] found id: "d72bb0bdd409ca1b9eee6c2382b896504c9bf98f1c09d2a7c0bb239af2a5c6bc"
	I1202 20:32:34.607052  212580 cri.go:89] found id: "12b210cc547fecac184a95362f3fc71a3557f11c3f96e014c6090ff680e2c37c"
	I1202 20:32:34.607058  212580 cri.go:89] found id: ""
	I1202 20:32:34.607107  212580 ssh_runner.go:195] Run: sudo runc list -f json
	I1202 20:32:34.621117  212580 out.go:203] 
	W1202 20:32:34.624139  212580 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:32:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:32:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1202 20:32:34.624166  212580 out.go:285] * 
	* 
	W1202 20:32:34.630057  212580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 20:32:34.633360  212580 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-774682 --alsologtostderr -v=5" : exit status 80
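The pause failure is narrower: CRI-O is clearly managing the containers (the crictl listing above returns fourteen IDs), but the pause path also runs `sudo runc list -f json`, which fails because /run/runc does not exist on this node, suggesting the containers were created under a different runtime state directory than the one the pause code inspects. A quick way to compare the two views from the host is sketched below; the profile name comes from the log above and the commands are ordinary minikube/crictl/runc invocations, shown only as a hedged reproduction sketch.
	# the listing the pause code retries, failing with "open /run/runc: no such file or directory"
	minikube -p pause-774682 ssh -- sudo runc list -f json
	# CRI-O's own view of the same containers (the call that succeeded in the log above)
	minikube -p pause-774682 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system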
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-774682
helpers_test.go:243: (dbg) docker inspect pause-774682:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57",
	        "Created": "2025-12-02T20:30:51.439495125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:30:51.526557467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57/hostname",
	        "HostsPath": "/var/lib/docker/containers/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57/hosts",
	        "LogPath": "/var/lib/docker/containers/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57-json.log",
	        "Name": "/pause-774682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-774682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-774682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57",
	                "LowerDir": "/var/lib/docker/overlay2/a4cfb059e539915b74860866fc504ff998021568771b858d4e68bf6bf290c08d-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4cfb059e539915b74860866fc504ff998021568771b858d4e68bf6bf290c08d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4cfb059e539915b74860866fc504ff998021568771b858d4e68bf6bf290c08d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4cfb059e539915b74860866fc504ff998021568771b858d4e68bf6bf290c08d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-774682",
	                "Source": "/var/lib/docker/volumes/pause-774682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-774682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-774682",
	                "name.minikube.sigs.k8s.io": "pause-774682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d22a15167e9a4f3134d1ec0b3d734cd54b8e290c10c12cc6f1cd11640552245d",
	            "SandboxKey": "/var/run/docker/netns/d22a15167e9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-774682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:67:3c:c4:c7:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dce15c848469907113b6d5a204240ebd31ad2ddccf9a79c0dd47371856ca1472",
	                    "EndpointID": "4e4db4557cc85219dd93723fce0dc7082272023484878f51365b14eea906a7d9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-774682",
	                        "2108796dc76e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-774682 -n pause-774682
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-774682 -n pause-774682: exit status 2 (367.028282ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-774682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-774682 logs -n 25: (1.350446967s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-778048 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:22 UTC │
	│ start   │ -p missing-upgrade-210819 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-210819    │ jenkins │ v1.35.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p NoKubernetes-778048 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:22 UTC │
	│ delete  │ -p NoKubernetes-778048                                                                                                                          │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:22 UTC │
	│ start   │ -p NoKubernetes-778048 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:23 UTC │
	│ ssh     │ -p NoKubernetes-778048 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │                     │
	│ stop    │ -p NoKubernetes-778048                                                                                                                          │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p NoKubernetes-778048 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ ssh     │ -p NoKubernetes-778048 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │                     │
	│ delete  │ -p NoKubernetes-778048                                                                                                                          │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-080046 │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p missing-upgrade-210819 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-210819    │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:24 UTC │
	│ stop    │ -p kubernetes-upgrade-080046                                                                                                                    │ kubernetes-upgrade-080046 │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-080046 │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │                     │
	│ delete  │ -p missing-upgrade-210819                                                                                                                       │ missing-upgrade-210819    │ jenkins │ v1.37.0 │ 02 Dec 25 20:24 UTC │ 02 Dec 25 20:24 UTC │
	│ start   │ -p stopped-upgrade-085945 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-085945    │ jenkins │ v1.35.0 │ 02 Dec 25 20:24 UTC │ 02 Dec 25 20:25 UTC │
	│ stop    │ stopped-upgrade-085945 stop                                                                                                                     │ stopped-upgrade-085945    │ jenkins │ v1.35.0 │ 02 Dec 25 20:25 UTC │ 02 Dec 25 20:25 UTC │
	│ start   │ -p stopped-upgrade-085945 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-085945    │ jenkins │ v1.37.0 │ 02 Dec 25 20:25 UTC │ 02 Dec 25 20:29 UTC │
	│ delete  │ -p stopped-upgrade-085945                                                                                                                       │ stopped-upgrade-085945    │ jenkins │ v1.37.0 │ 02 Dec 25 20:29 UTC │ 02 Dec 25 20:29 UTC │
	│ start   │ -p running-upgrade-568729 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-568729    │ jenkins │ v1.35.0 │ 02 Dec 25 20:29 UTC │ 02 Dec 25 20:30 UTC │
	│ start   │ -p running-upgrade-568729 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-568729    │ jenkins │ v1.37.0 │ 02 Dec 25 20:30 UTC │ 02 Dec 25 20:30 UTC │
	│ delete  │ -p running-upgrade-568729                                                                                                                       │ running-upgrade-568729    │ jenkins │ v1.37.0 │ 02 Dec 25 20:30 UTC │ 02 Dec 25 20:30 UTC │
	│ start   │ -p pause-774682 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-774682              │ jenkins │ v1.37.0 │ 02 Dec 25 20:30 UTC │ 02 Dec 25 20:32 UTC │
	│ start   │ -p pause-774682 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-774682              │ jenkins │ v1.37.0 │ 02 Dec 25 20:32 UTC │ 02 Dec 25 20:32 UTC │
	│ pause   │ -p pause-774682 --alsologtostderr -v=5                                                                                                          │ pause-774682              │ jenkins │ v1.37.0 │ 02 Dec 25 20:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:32:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:32:05.866145  211223 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:32:05.866676  211223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:32:05.866711  211223 out.go:374] Setting ErrFile to fd 2...
	I1202 20:32:05.866733  211223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:32:05.867033  211223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:32:05.867450  211223 out.go:368] Setting JSON to false
	I1202 20:32:05.868463  211223 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8064,"bootTime":1764699462,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 20:32:05.868560  211223 start.go:143] virtualization:  
	I1202 20:32:05.873810  211223 out.go:179] * [pause-774682] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 20:32:05.877128  211223 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 20:32:05.877215  211223 notify.go:221] Checking for updates...
	I1202 20:32:05.880883  211223 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:32:05.883894  211223 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 20:32:05.886828  211223 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 20:32:05.889691  211223 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 20:32:05.892605  211223 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:32:05.895949  211223 config.go:182] Loaded profile config "pause-774682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:32:05.896524  211223 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:32:05.925781  211223 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 20:32:05.925897  211223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:32:05.993965  211223 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 20:32:05.984225051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:32:05.994073  211223 docker.go:319] overlay module found
	I1202 20:32:05.997225  211223 out.go:179] * Using the docker driver based on existing profile
	I1202 20:32:06.000111  211223 start.go:309] selected driver: docker
	I1202 20:32:06.000136  211223 start.go:927] validating driver "docker" against &{Name:pause-774682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:32:06.000272  211223 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:32:06.000375  211223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:32:06.061731  211223 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 20:32:06.052662654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:32:06.062155  211223 cni.go:84] Creating CNI manager for ""
	I1202 20:32:06.062234  211223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:32:06.062282  211223 start.go:353] cluster config:
	{Name:pause-774682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:32:06.065595  211223 out.go:179] * Starting "pause-774682" primary control-plane node in "pause-774682" cluster
	I1202 20:32:06.068447  211223 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:32:06.071585  211223 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:32:06.074323  211223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:32:06.074370  211223 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 20:32:06.074380  211223 cache.go:65] Caching tarball of preloaded images
	I1202 20:32:06.074410  211223 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:32:06.074502  211223 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 20:32:06.074512  211223 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:32:06.074643  211223 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/config.json ...
	I1202 20:32:06.099450  211223 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:32:06.099471  211223 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:32:06.099487  211223 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:32:06.099521  211223 start.go:360] acquireMachinesLock for pause-774682: {Name:mk542181bd319b24dbfd31147451cd023cc98a07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:32:06.099579  211223 start.go:364] duration metric: took 36.339µs to acquireMachinesLock for "pause-774682"
	I1202 20:32:06.099615  211223 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:32:06.099622  211223 fix.go:54] fixHost starting: 
	I1202 20:32:06.099877  211223 cli_runner.go:164] Run: docker container inspect pause-774682 --format={{.State.Status}}
	I1202 20:32:06.121602  211223 fix.go:112] recreateIfNeeded on pause-774682: state=Running err=<nil>
	W1202 20:32:06.121635  211223 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:32:06.124883  211223 out.go:252] * Updating the running docker "pause-774682" container ...
	I1202 20:32:06.124923  211223 machine.go:94] provisionDockerMachine start ...
	I1202 20:32:06.125025  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.143440  211223 main.go:143] libmachine: Using SSH client type: native
	I1202 20:32:06.143772  211223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1202 20:32:06.143788  211223 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:32:06.297495  211223 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-774682
	
	I1202 20:32:06.297522  211223 ubuntu.go:182] provisioning hostname "pause-774682"
	I1202 20:32:06.297584  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.315372  211223 main.go:143] libmachine: Using SSH client type: native
	I1202 20:32:06.315688  211223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1202 20:32:06.315703  211223 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-774682 && echo "pause-774682" | sudo tee /etc/hostname
	I1202 20:32:06.481284  211223 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-774682
	
	I1202 20:32:06.481391  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.499766  211223 main.go:143] libmachine: Using SSH client type: native
	I1202 20:32:06.500093  211223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1202 20:32:06.500118  211223 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-774682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-774682/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-774682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:32:06.653832  211223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:32:06.653857  211223 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 20:32:06.653877  211223 ubuntu.go:190] setting up certificates
	I1202 20:32:06.653887  211223 provision.go:84] configureAuth start
	I1202 20:32:06.653969  211223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-774682
	I1202 20:32:06.672988  211223 provision.go:143] copyHostCerts
	I1202 20:32:06.673086  211223 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 20:32:06.673106  211223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 20:32:06.673188  211223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 20:32:06.673326  211223 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 20:32:06.673345  211223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 20:32:06.673389  211223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 20:32:06.673466  211223 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 20:32:06.673480  211223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 20:32:06.673509  211223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 20:32:06.673587  211223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.pause-774682 san=[127.0.0.1 192.168.85.2 localhost minikube pause-774682]
	I1202 20:32:06.771681  211223 provision.go:177] copyRemoteCerts
	I1202 20:32:06.771753  211223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:32:06.771797  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.791984  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:06.897818  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:32:06.916404  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1202 20:32:06.935259  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:32:06.961621  211223 provision.go:87] duration metric: took 307.711118ms to configureAuth
	I1202 20:32:06.961649  211223 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:32:06.961908  211223 config.go:182] Loaded profile config "pause-774682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:32:06.962018  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.979703  211223 main.go:143] libmachine: Using SSH client type: native
	I1202 20:32:06.980020  211223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1202 20:32:06.980040  211223 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:32:12.364345  211223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:32:12.364365  211223 machine.go:97] duration metric: took 6.239433951s to provisionDockerMachine
	I1202 20:32:12.364385  211223 start.go:293] postStartSetup for "pause-774682" (driver="docker")
	I1202 20:32:12.364397  211223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:32:12.364484  211223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:32:12.364525  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:12.382512  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:12.485587  211223 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:32:12.489032  211223 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:32:12.489062  211223 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:32:12.489072  211223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 20:32:12.489127  211223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 20:32:12.489212  211223 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 20:32:12.489320  211223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:32:12.496981  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 20:32:12.514882  211223 start.go:296] duration metric: took 150.48082ms for postStartSetup
	I1202 20:32:12.514981  211223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:32:12.515051  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:12.532517  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:12.634670  211223 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:32:12.639528  211223 fix.go:56] duration metric: took 6.53989967s for fixHost
	I1202 20:32:12.639557  211223 start.go:83] releasing machines lock for "pause-774682", held for 6.539965752s
	I1202 20:32:12.639621  211223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-774682
	I1202 20:32:12.657042  211223 ssh_runner.go:195] Run: cat /version.json
	I1202 20:32:12.657093  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:12.657094  211223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:32:12.657148  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:12.672846  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:12.675648  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:12.773220  211223 ssh_runner.go:195] Run: systemctl --version
	I1202 20:32:12.862430  211223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:32:12.901869  211223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:32:12.906111  211223 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:32:12.906203  211223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:32:12.913708  211223 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:32:12.913782  211223 start.go:496] detecting cgroup driver to use...
	I1202 20:32:12.913826  211223 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 20:32:12.913898  211223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:32:12.929618  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:32:12.942717  211223 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:32:12.942780  211223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:32:12.958294  211223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:32:12.971170  211223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:32:13.104612  211223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:32:13.240267  211223 docker.go:234] disabling docker service ...
	I1202 20:32:13.240338  211223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:32:13.255382  211223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:32:13.267881  211223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:32:13.423151  211223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:32:13.558016  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:32:13.571676  211223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:32:13.585922  211223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:32:13.585986  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.595079  211223 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:32:13.595153  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.604222  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.612852  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.621834  211223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:32:13.630374  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.639155  211223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.647434  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.656446  211223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:32:13.664317  211223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:32:13.673987  211223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:32:13.803501  211223 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:32:14.025439  211223 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:32:14.025556  211223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:32:14.029397  211223 start.go:564] Will wait 60s for crictl version
	I1202 20:32:14.029461  211223 ssh_runner.go:195] Run: which crictl
	I1202 20:32:14.033162  211223 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:32:14.065715  211223 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:32:14.065804  211223 ssh_runner.go:195] Run: crio --version
	I1202 20:32:14.104143  211223 ssh_runner.go:195] Run: crio --version
	I1202 20:32:14.139188  211223 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:32:14.142064  211223 cli_runner.go:164] Run: docker network inspect pause-774682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:32:14.157702  211223 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:32:14.161566  211223 kubeadm.go:884] updating cluster {Name:pause-774682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:32:14.161749  211223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:32:14.161812  211223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:32:14.199403  211223 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:32:14.199428  211223 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:32:14.199483  211223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:32:14.223705  211223 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:32:14.223729  211223 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:32:14.223737  211223 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1202 20:32:14.223831  211223 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-774682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:32:14.223904  211223 ssh_runner.go:195] Run: crio config
	I1202 20:32:14.280743  211223 cni.go:84] Creating CNI manager for ""
	I1202 20:32:14.280811  211223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:32:14.280846  211223 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:32:14.280895  211223 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-774682 NodeName:pause-774682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:32:14.281060  211223 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-774682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:32:14.281162  211223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:32:14.288506  211223 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:32:14.288609  211223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:32:14.296014  211223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1202 20:32:14.308293  211223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:32:14.320627  211223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1202 20:32:14.332719  211223 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:32:14.336609  211223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:32:14.461121  211223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:32:14.474823  211223 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682 for IP: 192.168.85.2
	I1202 20:32:14.474845  211223 certs.go:195] generating shared ca certs ...
	I1202 20:32:14.474863  211223 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:32:14.474993  211223 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 20:32:14.475041  211223 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 20:32:14.475053  211223 certs.go:257] generating profile certs ...
	I1202 20:32:14.475137  211223 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.key
	I1202 20:32:14.475207  211223 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/apiserver.key.ed7bff59
	I1202 20:32:14.475286  211223 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/proxy-client.key
	I1202 20:32:14.475403  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 20:32:14.475440  211223 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 20:32:14.475452  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:32:14.475478  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:32:14.475510  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:32:14.475541  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 20:32:14.475592  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 20:32:14.476218  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:32:14.494553  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:32:14.512228  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:32:14.529335  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 20:32:14.546413  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 20:32:14.563951  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:32:14.581817  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:32:14.599479  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:32:14.617200  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 20:32:14.634271  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:32:14.650783  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 20:32:14.667481  211223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:32:14.679372  211223 ssh_runner.go:195] Run: openssl version
	I1202 20:32:14.685334  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 20:32:14.693553  211223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 20:32:14.697172  211223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 20:32:14.697263  211223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 20:32:14.737724  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 20:32:14.745748  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 20:32:14.753801  211223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 20:32:14.757256  211223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 20:32:14.757316  211223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 20:32:14.799211  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:32:14.807113  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:32:14.815072  211223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:32:14.819226  211223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:32:14.819341  211223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:32:14.861435  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
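
The ls, openssl and ln calls above install each CA certificate under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run), which is the layout OpenSSL uses to look up trust anchors. A short sketch showing where those link names come from:

    # prints the subject hash that names the symlink, b5213941 for minikubeCA.pem here
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # the corresponding trust-store entry created by the log line above
    ls -l /etc/ssl/certs/b5213941.0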
	I1202 20:32:14.870633  211223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:32:14.875051  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:32:14.920460  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:32:14.979467  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:32:15.045569  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:32:15.150845  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:32:15.240758  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
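
Each openssl x509 -checkend 86400 call above exits 0 only if the certificate stays valid for at least another 86400 seconds (24 hours); this is presumably how minikube decides that the existing control-plane certificates can be reused rather than regenerated. The same check, run by hand against one of the certificates from this log:

    # a non-zero exit status means the certificate expires within the next 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      || echo "certificate expires within 86400s"
    # print the exact expiry date for comparison
    openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt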
	I1202 20:32:15.309058  211223 kubeadm.go:401] StartCluster: {Name:pause-774682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:32:15.309245  211223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:32:15.309340  211223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:32:15.354301  211223 cri.go:89] found id: "18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6"
	I1202 20:32:15.354375  211223 cri.go:89] found id: "0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1"
	I1202 20:32:15.354394  211223 cri.go:89] found id: "720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b"
	I1202 20:32:15.354412  211223 cri.go:89] found id: "40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45"
	I1202 20:32:15.354445  211223 cri.go:89] found id: "8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d"
	I1202 20:32:15.354466  211223 cri.go:89] found id: "ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb"
	I1202 20:32:15.354483  211223 cri.go:89] found id: "1a512904d19713ec18413a4d149443e9c01ab0567733a1477ec83a78dbbb44d1"
	I1202 20:32:15.354501  211223 cri.go:89] found id: "749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036"
	I1202 20:32:15.354532  211223 cri.go:89] found id: "155b4f24673bf135703ca3c7d2801c37e6f9067375a70dabce8152b9423f0975"
	I1202 20:32:15.354558  211223 cri.go:89] found id: "2951d99fb4dcbff701c97808399b1ff8aa99a8e8c9f4ff303e5dcd3a11e69d29"
	I1202 20:32:15.354577  211223 cri.go:89] found id: "f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3"
	I1202 20:32:15.354595  211223 cri.go:89] found id: "88c92b704d85b8c7806dc927206e2665a9600188d1426de4a7fb8070836531e7"
	I1202 20:32:15.354634  211223 cri.go:89] found id: "d72bb0bdd409ca1b9eee6c2382b896504c9bf98f1c09d2a7c0bb239af2a5c6bc"
	I1202 20:32:15.354656  211223 cri.go:89] found id: "12b210cc547fecac184a95362f3fc71a3557f11c3f96e014c6090ff680e2c37c"
	I1202 20:32:15.354674  211223 cri.go:89] found id: ""
	I1202 20:32:15.354753  211223 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:32:15.371329  211223 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:32:15Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:32:15.371473  211223 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:32:15.386823  211223 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:32:15.386890  211223 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:32:15.386976  211223 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:32:15.399052  211223 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:32:15.399790  211223 kubeconfig.go:125] found "pause-774682" server: "https://192.168.85.2:8443"
	I1202 20:32:15.400713  211223 kapi.go:59] client config for pause-774682: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 20:32:15.401470  211223 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 20:32:15.401565  211223 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 20:32:15.401601  211223 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 20:32:15.401624  211223 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 20:32:15.401641  211223 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 20:32:15.402031  211223 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:32:15.414186  211223 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:32:15.414270  211223 kubeadm.go:602] duration metric: took 27.360522ms to restartPrimaryControlPlane
	I1202 20:32:15.414294  211223 kubeadm.go:403] duration metric: took 105.24654ms to StartCluster
	I1202 20:32:15.414354  211223 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:32:15.414445  211223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 20:32:15.415402  211223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:32:15.415680  211223 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:32:15.416070  211223 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:32:15.416499  211223 config.go:182] Loaded profile config "pause-774682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:32:15.421942  211223 out.go:179] * Enabled addons: 
	I1202 20:32:15.422042  211223 out.go:179] * Verifying Kubernetes components...
	I1202 20:32:15.424743  211223 addons.go:530] duration metric: took 8.674774ms for enable addons: enabled=[]
	I1202 20:32:15.424856  211223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:32:15.698590  211223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:32:15.720217  211223 node_ready.go:35] waiting up to 6m0s for node "pause-774682" to be "Ready" ...
	I1202 20:32:20.288357  211223 node_ready.go:49] node "pause-774682" is "Ready"
	I1202 20:32:20.288382  211223 node_ready.go:38] duration metric: took 4.56813541s for node "pause-774682" to be "Ready" ...
	I1202 20:32:20.288397  211223 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:32:20.288455  211223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:32:20.309538  211223 api_server.go:72] duration metric: took 4.893792803s to wait for apiserver process to appear ...
	I1202 20:32:20.309560  211223 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:32:20.309578  211223 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 20:32:20.377022  211223 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:32:20.377114  211223 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:32:20.809698  211223 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 20:32:20.819268  211223 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:32:20.819304  211223 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
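
The two 500 responses above show the apiserver for pause-774682 coming back after the restart: the first poll still has five post-start hooks pending, the second only the RBAC and priority-class bootstrap hooks, and roughly a second later (the 20:32:21 entries further down, interleaved with the other minikube process) the endpoint returns 200 "ok". A minimal sketch for probing the same endpoint by hand, assuming anonymous access to /healthz is enabled as it is by default:

    # the ?verbose query returns the same per-check breakdown seen in this log
    curl -k "https://192.168.85.2:8443/healthz?verbose"
    # individual checks can also be queried, for example etcd connectivity only
    curl -k "https://192.168.85.2:8443/healthz/etcd"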
	I1202 20:32:23.503840  181375 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00103374s
	I1202 20:32:23.503874  181375 kubeadm.go:319] 
	I1202 20:32:23.503956  181375 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 20:32:23.504007  181375 kubeadm.go:319] 	- The kubelet is not running
	I1202 20:32:23.504125  181375 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 20:32:23.504133  181375 kubeadm.go:319] 
	I1202 20:32:23.504237  181375 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 20:32:23.504278  181375 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 20:32:23.504310  181375 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 20:32:23.504314  181375 kubeadm.go:319] 
	I1202 20:32:23.508552  181375 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 20:32:23.509049  181375 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 20:32:23.509182  181375 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:32:23.509456  181375 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 20:32:23.509466  181375 kubeadm.go:319] 
	I1202 20:32:23.509543  181375 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 20:32:23.509696  181375 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00103374s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
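
The failed run above is from another minikube process in this log (pid 181375, bootstrapping a v1.35.0-beta.0 cluster), whose kubeadm init gave up after the kubelet stayed unhealthy for the full 4m0s kubelet-check window; the retry starts just below with kubeadm reset. A hedged troubleshooting sketch that collects exactly the diagnostics the error text asks for, run from inside the affected node:

    # kubeadm's own suggestions from the message above
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 100
    # the endpoint kubeadm polls for up to 4m0s before giving up
    curl -sSL http://127.0.0.1:10248/healthz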
	
	I1202 20:32:23.509800  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 20:32:23.926878  181375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:32:23.941936  181375 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:32:23.941999  181375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:32:23.956231  181375 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:32:23.956251  181375 kubeadm.go:158] found existing configuration files:
	
	I1202 20:32:23.956312  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:32:23.965353  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:32:23.965424  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:32:23.974074  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:32:23.983217  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:32:23.983330  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:32:23.991422  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:32:24.000663  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:32:24.000765  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:32:24.011295  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:32:24.021043  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:32:24.021186  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:32:24.035281  181375 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:32:24.091828  181375 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 20:32:24.092279  181375 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:32:24.177998  181375 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:32:24.178147  181375 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 20:32:24.178212  181375 kubeadm.go:319] OS: Linux
	I1202 20:32:24.178291  181375 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:32:24.178370  181375 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 20:32:24.178450  181375 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:32:24.178518  181375 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:32:24.178610  181375 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:32:24.178682  181375 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:32:24.178775  181375 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:32:24.178854  181375 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:32:24.178913  181375 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 20:32:24.251731  181375 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:32:24.251848  181375 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:32:24.251946  181375 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:32:24.267154  181375 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 20:32:24.270532  181375 out.go:252]   - Generating certificates and keys ...
	I1202 20:32:24.270644  181375 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:32:24.270773  181375 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:32:24.270888  181375 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 20:32:24.270971  181375 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 20:32:24.271058  181375 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 20:32:24.271152  181375 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 20:32:24.271232  181375 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 20:32:24.271315  181375 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 20:32:24.271403  181375 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 20:32:24.271485  181375 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 20:32:24.271560  181375 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 20:32:24.271633  181375 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 20:32:24.364613  181375 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:32:24.725639  181375 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:32:24.885871  181375 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:32:25.393339  181375 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:32:25.623587  181375 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:32:25.624397  181375 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:32:25.627107  181375 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 20:32:21.309736  211223 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 20:32:21.319165  211223 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1202 20:32:21.320933  211223 api_server.go:141] control plane version: v1.34.2
	I1202 20:32:21.320964  211223 api_server.go:131] duration metric: took 1.011396967s to wait for apiserver health ...
	I1202 20:32:21.320974  211223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:32:21.325141  211223 system_pods.go:59] 7 kube-system pods found
	I1202 20:32:21.325187  211223 system_pods.go:61] "coredns-66bc5c9577-k2d8x" [a9ab0b8e-b81f-4abe-b30f-2fa936fa2aa4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:32:21.325196  211223 system_pods.go:61] "etcd-pause-774682" [7a48cfbe-199b-4f69-9a61-6381b804ab50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:32:21.325201  211223 system_pods.go:61] "kindnet-hh7zt" [2b9582da-2b2c-4243-baf2-b681960b8809] Running
	I1202 20:32:21.325208  211223 system_pods.go:61] "kube-apiserver-pause-774682" [096b9a71-1403-42a3-94f0-b8f03a7d003a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:32:21.325214  211223 system_pods.go:61] "kube-controller-manager-pause-774682" [adbd4fa7-8442-42ad-a082-4302656091ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:32:21.325218  211223 system_pods.go:61] "kube-proxy-6m8f8" [fccfb9dc-b054-469a-8dc0-1fa4c56ec683] Running
	I1202 20:32:21.325225  211223 system_pods.go:61] "kube-scheduler-pause-774682" [72a1fbfe-a916-4f15-80ca-74452cdf74c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:32:21.325231  211223 system_pods.go:74] duration metric: took 4.250801ms to wait for pod list to return data ...
	I1202 20:32:21.325243  211223 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:32:21.327655  211223 default_sa.go:45] found service account: "default"
	I1202 20:32:21.327680  211223 default_sa.go:55] duration metric: took 2.429741ms for default service account to be created ...
	I1202 20:32:21.327689  211223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:32:21.331752  211223 system_pods.go:86] 7 kube-system pods found
	I1202 20:32:21.331791  211223 system_pods.go:89] "coredns-66bc5c9577-k2d8x" [a9ab0b8e-b81f-4abe-b30f-2fa936fa2aa4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:32:21.331801  211223 system_pods.go:89] "etcd-pause-774682" [7a48cfbe-199b-4f69-9a61-6381b804ab50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:32:21.331807  211223 system_pods.go:89] "kindnet-hh7zt" [2b9582da-2b2c-4243-baf2-b681960b8809] Running
	I1202 20:32:21.331814  211223 system_pods.go:89] "kube-apiserver-pause-774682" [096b9a71-1403-42a3-94f0-b8f03a7d003a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:32:21.331822  211223 system_pods.go:89] "kube-controller-manager-pause-774682" [adbd4fa7-8442-42ad-a082-4302656091ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:32:21.331827  211223 system_pods.go:89] "kube-proxy-6m8f8" [fccfb9dc-b054-469a-8dc0-1fa4c56ec683] Running
	I1202 20:32:21.331835  211223 system_pods.go:89] "kube-scheduler-pause-774682" [72a1fbfe-a916-4f15-80ca-74452cdf74c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:32:21.331841  211223 system_pods.go:126] duration metric: took 4.147304ms to wait for k8s-apps to be running ...
	I1202 20:32:21.331853  211223 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:32:21.331910  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:32:21.346836  211223 system_svc.go:56] duration metric: took 14.967084ms WaitForService to wait for kubelet
	I1202 20:32:21.346867  211223 kubeadm.go:587] duration metric: took 5.931126144s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:32:21.346887  211223 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:32:21.358366  211223 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 20:32:21.358402  211223 node_conditions.go:123] node cpu capacity is 2
	I1202 20:32:21.358417  211223 node_conditions.go:105] duration metric: took 11.524548ms to run NodePressure ...
	I1202 20:32:21.358430  211223 start.go:242] waiting for startup goroutines ...
	I1202 20:32:21.358438  211223 start.go:247] waiting for cluster config update ...
	I1202 20:32:21.358446  211223 start.go:256] writing updated cluster config ...
	I1202 20:32:21.358736  211223 ssh_runner.go:195] Run: rm -f paused
	I1202 20:32:21.362807  211223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:32:21.363418  211223 kapi.go:59] client config for pause-774682: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 20:32:21.369963  211223 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k2d8x" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:32:23.376507  211223 pod_ready.go:104] pod "coredns-66bc5c9577-k2d8x" is not "Ready", error: <nil>
	W1202 20:32:25.378921  211223 pod_ready.go:104] pod "coredns-66bc5c9577-k2d8x" is not "Ready", error: <nil>
	I1202 20:32:25.630436  181375 out.go:252]   - Booting up control plane ...
	I1202 20:32:25.630545  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:32:25.630624  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:32:25.631009  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:32:25.647838  181375 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:32:25.648215  181375 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:32:25.656245  181375 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:32:25.656744  181375 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:32:25.656812  181375 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:32:25.794935  181375 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:32:25.795058  181375 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:32:27.376173  211223 pod_ready.go:94] pod "coredns-66bc5c9577-k2d8x" is "Ready"
	I1202 20:32:27.376203  211223 pod_ready.go:86] duration metric: took 6.006215347s for pod "coredns-66bc5c9577-k2d8x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:27.378665  211223 pod_ready.go:83] waiting for pod "etcd-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:27.383193  211223 pod_ready.go:94] pod "etcd-pause-774682" is "Ready"
	I1202 20:32:27.383219  211223 pod_ready.go:86] duration metric: took 4.526994ms for pod "etcd-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:27.385812  211223 pod_ready.go:83] waiting for pod "kube-apiserver-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:32:29.391378  211223 pod_ready.go:104] pod "kube-apiserver-pause-774682" is not "Ready", error: <nil>
	W1202 20:32:31.404044  211223 pod_ready.go:104] pod "kube-apiserver-pause-774682" is not "Ready", error: <nil>
	I1202 20:32:32.391209  211223 pod_ready.go:94] pod "kube-apiserver-pause-774682" is "Ready"
	I1202 20:32:32.391240  211223 pod_ready.go:86] duration metric: took 5.005399706s for pod "kube-apiserver-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.393612  211223 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.398098  211223 pod_ready.go:94] pod "kube-controller-manager-pause-774682" is "Ready"
	I1202 20:32:32.398128  211223 pod_ready.go:86] duration metric: took 4.485969ms for pod "kube-controller-manager-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.402642  211223 pod_ready.go:83] waiting for pod "kube-proxy-6m8f8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.407264  211223 pod_ready.go:94] pod "kube-proxy-6m8f8" is "Ready"
	I1202 20:32:32.407288  211223 pod_ready.go:86] duration metric: took 4.620202ms for pod "kube-proxy-6m8f8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.409273  211223 pod_ready.go:83] waiting for pod "kube-scheduler-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.774194  211223 pod_ready.go:94] pod "kube-scheduler-pause-774682" is "Ready"
	I1202 20:32:32.774225  211223 pod_ready.go:86] duration metric: took 364.925252ms for pod "kube-scheduler-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.774246  211223 pod_ready.go:40] duration metric: took 11.411402047s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:32:32.829594  211223 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 20:32:32.832656  211223 out.go:179] * Done! kubectl is now configured to use "pause-774682" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.090390408Z" level=info msg="Created container 720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b: kube-system/etcd-pause-774682/etcd" id=e326ddec-92ef-4dbe-89ec-69c3a6e2cdf3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.091795505Z" level=info msg="Starting container: 40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45" id=4afb54e6-5717-4433-9326-523479f691a0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.094144607Z" level=info msg="Created container 0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1: kube-system/kindnet-hh7zt/kindnet-cni" id=b99e806f-b392-4c5e-9ebe-9e5322874fa7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.097199692Z" level=info msg="Starting container: 0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1" id=bfc19f32-82b6-4d79-8e63-b64de66c25b4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.100670372Z" level=info msg="Starting container: 720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b" id=b7fd2ae1-1400-4570-99ff-08c68d6e4e8e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.113474448Z" level=info msg="Started container" PID=2310 containerID=8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d description=kube-system/kube-proxy-6m8f8/kube-proxy id=89447778-2c4d-4bc5-be7b-44c355ab9f34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e816f1745372d94586830dcdf54578908685a0ca37d59e354c953001216545c6
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.113838746Z" level=info msg="Started container" PID=2304 containerID=ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb description=kube-system/kube-controller-manager-pause-774682/kube-controller-manager id=404b3ecf-73a5-4c8e-a4c3-83e7b1ab7ee6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa41a923be1ee90eb3f32d9bed72be02f58fd74b5843a77d64dcd84bc0e10a5f
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.125783195Z" level=info msg="Started container" PID=2326 containerID=720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b description=kube-system/etcd-pause-774682/etcd id=b7fd2ae1-1400-4570-99ff-08c68d6e4e8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a14e3d0708a757a480a2b774f5181bb68ef495132af8f19cf61d581fa3875f5
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.134114635Z" level=info msg="Started container" PID=2320 containerID=40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45 description=kube-system/coredns-66bc5c9577-k2d8x/coredns id=4afb54e6-5717-4433-9326-523479f691a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e57cae3f7f4cfc83263f52f7c20c7df3e76d188fb455c68ce31ad1364481429a
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.138152396Z" level=info msg="Started container" PID=2321 containerID=0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1 description=kube-system/kindnet-hh7zt/kindnet-cni id=bfc19f32-82b6-4d79-8e63-b64de66c25b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7240ccfc89b580c0c016b6fae574541b10b582e25c52c02f2eec3bc0f399695e
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.186808832Z" level=info msg="Created container 18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6: kube-system/kube-scheduler-pause-774682/kube-scheduler" id=abb93f0f-b6ff-4d44-b969-d6bfeec93d37 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.187680758Z" level=info msg="Starting container: 18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6" id=a1c0d65e-daf6-4a43-9ca9-e85aff16bfa4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.190381449Z" level=info msg="Started container" PID=2338 containerID=18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6 description=kube-system/kube-scheduler-pause-774682/kube-scheduler id=a1c0d65e-daf6-4a43-9ca9-e85aff16bfa4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e3b57b1a1de3298870ff109736f72e217d14a73e8aaa0887306a6dad9f76348
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.591312274Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.598355734Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.59850869Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.598603021Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.609760942Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.60992641Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.610007244Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.613367248Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.613551251Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.613703025Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.617762873Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.617929925Z" level=info msg="Updated default CNI network name to kindnet"
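(Editor's note, not part of the captured log.) The CREATE / WRITE / RENAME sequence CRI-O reports above is the usual atomic-replace pattern for CNI config files: the writer (kindnet here) stages the config as 10-kindnet.conflist.temp and then renames it into place, so the watcher never loads a half-written conflist. A minimal Go sketch of that pattern, assuming nothing about kindnet's actual implementation (the file names and JSON payload are illustrative only):

package main

import (
	"os"
	"path/filepath"
)

// writeConflistAtomically stages data as <name>.temp in dir and then renames
// it to <name>. Rename is atomic on the same filesystem, which matches the
// event order logged above: CREATE .temp, WRITE .temp, RENAME .temp -> final.
func writeConflistAtomically(dir, name string, data []byte) error {
	tmp := filepath.Join(dir, name+".temp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, name))
}

func main() {
	// Demo against a scratch directory rather than the real /etc/cni/net.d.
	dir, err := os.MkdirTemp("", "cni-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Hypothetical payload; the log only tells us the network is "kindnet" of type "ptp".
	conf := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[{"type":"ptp"}]}`)
	if err := writeConflistAtomically(dir, "10-kindnet.conflist", conf); err != nil {
		panic(err)
	}
}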
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	18e2a378f13b2       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   20 seconds ago       Running             kube-scheduler            1                   6e3b57b1a1de3       kube-scheduler-pause-774682            kube-system
	0efee45e9768d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   7240ccfc89b58       kindnet-hh7zt                          kube-system
	720b2cec41b0e       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   20 seconds ago       Running             etcd                      1                   9a14e3d0708a7       etcd-pause-774682                      kube-system
	40517c60f4a8d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   e57cae3f7f4cf       coredns-66bc5c9577-k2d8x               kube-system
	8d6248562e377       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   20 seconds ago       Running             kube-proxy                1                   e816f1745372d       kube-proxy-6m8f8                       kube-system
	ffa7d32b08871       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   20 seconds ago       Running             kube-controller-manager   1                   aa41a923be1ee       kube-controller-manager-pause-774682   kube-system
	1a512904d1971       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   20 seconds ago       Running             kube-apiserver            1                   54c8cb46c371a       kube-apiserver-pause-774682            kube-system
	749b87a7b151c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   31 seconds ago       Exited              coredns                   0                   e57cae3f7f4cf       coredns-66bc5c9577-k2d8x               kube-system
	155b4f24673bf       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   e816f1745372d       kube-proxy-6m8f8                       kube-system
	2951d99fb4dcb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   7240ccfc89b58       kindnet-hh7zt                          kube-system
	f224e46bf3d29       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   6e3b57b1a1de3       kube-scheduler-pause-774682            kube-system
	88c92b704d85b       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   54c8cb46c371a       kube-apiserver-pause-774682            kube-system
	d72bb0bdd409c       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   aa41a923be1ee       kube-controller-manager-pause-774682   kube-system
	12b210cc547fe       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   9a14e3d0708a7       etcd-pause-774682                      kube-system
	
	
	==> coredns [40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60627 - 24717 "HINFO IN 6612679793645246457.7199248238648309224. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015094777s
	
	
	==> coredns [749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46028 - 40664 "HINFO IN 6615269274718692194.1592327369996744137. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019420029s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-774682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-774682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=pause-774682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_31_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:31:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-774682
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:32:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:32:18 +0000   Tue, 02 Dec 2025 20:31:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:32:18 +0000   Tue, 02 Dec 2025 20:31:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:32:18 +0000   Tue, 02 Dec 2025 20:31:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:32:18 +0000   Tue, 02 Dec 2025 20:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-774682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                27e8817a-e96b-40ca-bc7f-268161b8b480
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-k2d8x                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     73s
	  kube-system                 etcd-pause-774682                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kindnet-hh7zt                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-pause-774682             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-774682    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-6m8f8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-774682             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 72s   kube-proxy       
	  Normal   Starting                 15s   kube-proxy       
	  Normal   Starting                 79s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s   kubelet          Node pause-774682 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s   kubelet          Node pause-774682 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s   kubelet          Node pause-774682 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           74s   node-controller  Node pause-774682 event: Registered Node pause-774682 in Controller
	  Normal   NodeReady                32s   kubelet          Node pause-774682 status is now: NodeReady
	  Normal   RegisteredNode           13s   node-controller  Node pause-774682 event: Registered Node pause-774682 in Controller
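(Editor's note, not part of the captured output.) The "Allocated resources" figures in the node description above are simply the column sums of the Non-terminated Pods table: 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m CPU requested out of the node's 2000m allocatable, which kubectl describe rounds to 42%. A throwaway Go check of that arithmetic, using only the numbers printed in the table (not part of the report tooling):

package main

import "fmt"

func main() {
	// CPU requests (millicores) copied from the Non-terminated Pods table above.
	requests := map[string]int{
		"coredns-66bc5c9577-k2d8x":             100,
		"etcd-pause-774682":                    100,
		"kindnet-hh7zt":                        100,
		"kube-apiserver-pause-774682":          250,
		"kube-controller-manager-pause-774682": 200,
		"kube-proxy-6m8f8":                     0,
		"kube-scheduler-pause-774682":          100,
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	const allocatable = 2000 // "cpu: 2" under Allocatable
	// Prints 850m / 2000m (42.5%); describe shows this as 42%.
	fmt.Printf("cpu requests: %dm / %dm (%.1f%%)\n", total, allocatable, 100*float64(total)/allocatable)
}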
	
	
	==> dmesg <==
	[Dec 2 19:47] overlayfs: idmapped layers are currently not supported
	[ +24.868591] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:55] overlayfs: idmapped layers are currently not supported
	[  +3.715582] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:58] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:02] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:04] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:09] overlayfs: idmapped layers are currently not supported
	[ +31.785180] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:10] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:12] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:13] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:14] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [12b210cc547fecac184a95362f3fc71a3557f11c3f96e014c6090ff680e2c37c] <==
	{"level":"warn","ts":"2025-12-02T20:31:13.182336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.195127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.219569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.248443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.262978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.282118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.404103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35288","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:32:07.150006Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T20:32:07.150049Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-774682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-02T20:32:07.150480Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T20:32:07.150564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T20:32:07.292457Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:32:07.292589Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-12-02T20:32:07.292423Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T20:32:07.292743Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T20:32:07.292780Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T20:32:07.292855Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T20:32:07.292961Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T20:32:07.293005Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:32:07.292877Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-02T20:32:07.292887Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T20:32:07.296657Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-02T20:32:07.296847Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:32:07.296967Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-02T20:32:07.297047Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-774682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b] <==
	{"level":"warn","ts":"2025-12-02T20:32:18.443389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.488833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.507860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.550196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.618783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.648286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.679597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.712496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.742967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.769940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.797444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.844420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.881965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.905409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.944001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.965746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.014893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.056216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.075357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.094133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.116086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.146226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.161409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.212743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.242667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:32:35 up  2:14,  0 user,  load average: 1.60, 1.66, 1.72
	Linux pause-774682 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1] <==
	I1202 20:32:15.313439       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:32:15.313951       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:32:15.314152       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:32:15.314197       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:32:15.314257       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:32:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:32:15.590539       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:32:15.590626       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:32:15.595857       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:32:15.604130       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:32:20.432957       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:32:20.433080       1 metrics.go:72] Registering metrics
	I1202 20:32:20.433185       1 controller.go:711] "Syncing nftables rules"
	I1202 20:32:25.590810       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:32:25.590954       1 main.go:301] handling current node
	I1202 20:32:35.590877       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:32:35.590929       1 main.go:301] handling current node
	
	
	==> kindnet [2951d99fb4dcbff701c97808399b1ff8aa99a8e8c9f4ff303e5dcd3a11e69d29] <==
	I1202 20:31:22.604353       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:31:22.690602       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:31:22.690740       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:31:22.690760       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:31:22.690776       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:31:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:31:22.891184       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:31:22.891254       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:31:22.891289       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:31:22.891615       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 20:31:52.891946       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 20:31:52.891966       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 20:31:52.892063       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 20:31:52.892153       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1202 20:31:54.491992       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:31:54.492096       1 metrics.go:72] Registering metrics
	I1202 20:31:54.492182       1 controller.go:711] "Syncing nftables rules"
	I1202 20:32:02.897723       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:32:02.897780       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1a512904d19713ec18413a4d149443e9c01ab0567733a1477ec83a78dbbb44d1] <==
	I1202 20:32:20.332278       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 20:32:20.336196       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:32:20.336284       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 20:32:20.336355       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 20:32:20.336395       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 20:32:20.337261       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:32:20.337327       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 20:32:20.344447       1 aggregator.go:171] initial CRD sync complete...
	I1202 20:32:20.344477       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 20:32:20.344485       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:32:20.344492       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:32:20.344747       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 20:32:20.344795       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 20:32:20.357185       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 20:32:20.383794       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:32:20.390754       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1202 20:32:20.390794       1 policy_source.go:240] refreshing policies
	E1202 20:32:20.450197       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:32:20.464313       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:32:21.012112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:32:21.335659       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:32:22.709035       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:32:22.951606       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:32:23.004258       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:32:23.105124       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [88c92b704d85b8c7806dc927206e2665a9600188d1426de4a7fb8070836531e7] <==
	W1202 20:32:07.161304       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161366       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161415       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161467       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161518       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161565       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161613       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161928       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.168463       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169916       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.170027       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.170166       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169384       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169413       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169440       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169463       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169491       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169517       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169543       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169567       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169593       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169620       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169673       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.170930       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.171035       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d72bb0bdd409ca1b9eee6c2382b896504c9bf98f1c09d2a7c0bb239af2a5c6bc] <==
	I1202 20:31:21.203783       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:31:21.205227       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 20:31:21.206426       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:31:21.211842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:31:21.217010       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:31:21.217035       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:31:21.217043       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:31:21.232374       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:31:21.233021       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 20:31:21.241783       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:31:21.248242       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 20:31:21.249470       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 20:31:21.249839       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 20:31:21.249867       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 20:31:21.249884       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:31:21.249918       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 20:31:21.252183       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:31:21.252639       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 20:31:21.253018       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 20:31:21.253412       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:31:21.253452       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 20:31:21.253575       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:31:21.253601       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 20:31:21.253616       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:32:06.207297       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb] <==
	I1202 20:32:22.695249       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 20:32:22.696982       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:32:22.697050       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:32:22.700709       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 20:32:22.700809       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:32:22.704698       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 20:32:22.704797       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:32:22.708183       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:32:22.716729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:32:22.720065       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 20:32:22.722547       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 20:32:22.724725       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:32:22.724747       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:32:22.724754       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:32:22.728065       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:32:22.731672       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 20:32:22.733934       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 20:32:22.744939       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 20:32:22.744990       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 20:32:22.745012       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 20:32:22.745476       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:32:22.745027       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:32:22.745885       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 20:32:22.745915       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 20:32:22.769436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [155b4f24673bf135703ca3c7d2801c37e6f9067375a70dabce8152b9423f0975] <==
	I1202 20:31:22.622961       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:31:22.703770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:31:22.810020       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:31:22.825979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:31:22.846001       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:31:22.888727       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:31:22.888847       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:31:22.898624       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:31:22.899220       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:31:22.899605       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:31:22.901541       1 config.go:200] "Starting service config controller"
	I1202 20:31:22.901604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:31:22.901794       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:31:22.913624       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:31:22.913825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:31:22.902142       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:31:22.913871       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:31:22.913877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:31:22.901823       1 config.go:309] "Starting node config controller"
	I1202 20:31:22.913901       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:31:22.913917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:31:23.001869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d] <==
	I1202 20:32:19.982146       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:32:20.192629       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:32:20.493959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:32:20.494004       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:32:20.494068       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:32:20.540812       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:32:20.540884       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:32:20.548317       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:32:20.548628       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:32:20.548650       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:32:20.551090       1 config.go:200] "Starting service config controller"
	I1202 20:32:20.554204       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:32:20.555675       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:32:20.555720       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:32:20.555742       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:32:20.555747       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:32:20.557142       1 config.go:309] "Starting node config controller"
	I1202 20:32:20.557164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:32:20.557171       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:32:20.656301       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:32:20.656345       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:32:20.656380       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6] <==
	I1202 20:32:18.825412       1 serving.go:386] Generated self-signed cert in-memory
	W1202 20:32:20.244889       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:32:20.244977       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 20:32:20.245012       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:32:20.245044       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:32:20.348420       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 20:32:20.351879       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:32:20.354838       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:32:20.355483       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:32:20.370112       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:32:20.355507       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:32:20.470286       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3] <==
	E1202 20:31:14.864433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:31:14.864513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:31:14.864575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 20:31:14.864926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:31:14.865141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:31:14.865212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:31:14.865259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 20:31:14.865299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:31:14.865340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 20:31:14.865409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 20:31:14.865450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 20:31:14.870176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1202 20:31:14.870178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:31:14.870282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 20:31:14.870329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 20:31:14.870384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 20:31:14.869907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:31:15.720884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1202 20:31:18.551842       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:32:07.159056       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 20:32:07.159083       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 20:32:07.159103       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 20:32:07.159136       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:32:07.159290       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 20:32:07.159305       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.907072    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="565e38cd573797c94a307fd326fe2c3f" pod="kube-system/etcd-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.907388    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="597fada52996194c31d0bb778894ba14" pod="kube-system/kube-apiserver-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.907756    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85043deead313637703f384cbb896f2a" pod="kube-system/kube-scheduler-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.908047    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-hh7zt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2b9582da-2b2c-4243-baf2-b681960b8809" pod="kube-system/kindnet-hh7zt"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.908336    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6m8f8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fccfb9dc-b054-469a-8dc0-1fa4c56ec683" pod="kube-system/kube-proxy-6m8f8"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: I1202 20:32:14.918256    1312 scope.go:117] "RemoveContainer" containerID="749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.918851    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85043deead313637703f384cbb896f2a" pod="kube-system/kube-scheduler-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.919060    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-hh7zt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2b9582da-2b2c-4243-baf2-b681960b8809" pod="kube-system/kindnet-hh7zt"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.919307    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6m8f8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fccfb9dc-b054-469a-8dc0-1fa4c56ec683" pod="kube-system/kube-proxy-6m8f8"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.919539    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-k2d8x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a9ab0b8e-b81f-4abe-b30f-2fa936fa2aa4" pod="kube-system/coredns-66bc5c9577-k2d8x"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.919977    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e719364ba9dec0a70dcd38332b913eb4" pod="kube-system/kube-controller-manager-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.920286    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="565e38cd573797c94a307fd326fe2c3f" pod="kube-system/etcd-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.920589    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="597fada52996194c31d0bb778894ba14" pod="kube-system/kube-apiserver-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: I1202 20:32:14.943247    1312 scope.go:117] "RemoveContainer" containerID="f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944026    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="597fada52996194c31d0bb778894ba14" pod="kube-system/kube-apiserver-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944281    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85043deead313637703f384cbb896f2a" pod="kube-system/kube-scheduler-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944487    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-hh7zt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2b9582da-2b2c-4243-baf2-b681960b8809" pod="kube-system/kindnet-hh7zt"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944668    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6m8f8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fccfb9dc-b054-469a-8dc0-1fa4c56ec683" pod="kube-system/kube-proxy-6m8f8"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944879    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-k2d8x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a9ab0b8e-b81f-4abe-b30f-2fa936fa2aa4" pod="kube-system/coredns-66bc5c9577-k2d8x"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.945073    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e719364ba9dec0a70dcd38332b913eb4" pod="kube-system/kube-controller-manager-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.945260    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="565e38cd573797c94a307fd326fe2c3f" pod="kube-system/etcd-pause-774682"
	Dec 02 20:32:26 pause-774682 kubelet[1312]: W1202 20:32:26.978177    1312 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 02 20:32:33 pause-774682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:32:33 pause-774682 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:32:33 pause-774682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-774682 -n pause-774682
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-774682 -n pause-774682: exit status 2 (344.781937ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-774682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-774682
helpers_test.go:243: (dbg) docker inspect pause-774682:

-- stdout --
	[
	    {
	        "Id": "2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57",
	        "Created": "2025-12-02T20:30:51.439495125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T20:30:51.526557467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57/hostname",
	        "HostsPath": "/var/lib/docker/containers/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57/hosts",
	        "LogPath": "/var/lib/docker/containers/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57/2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57-json.log",
	        "Name": "/pause-774682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-774682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-774682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2108796dc76ec8d6aced8d766563f08957824f349cf9e806eb215b52c5b7ec57",
	                "LowerDir": "/var/lib/docker/overlay2/a4cfb059e539915b74860866fc504ff998021568771b858d4e68bf6bf290c08d-init/diff:/var/lib/docker/overlay2/772a6e06064ffdf44f714ef89bf902f5c1708f0c895d610baf2f1063f7a35032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4cfb059e539915b74860866fc504ff998021568771b858d4e68bf6bf290c08d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4cfb059e539915b74860866fc504ff998021568771b858d4e68bf6bf290c08d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4cfb059e539915b74860866fc504ff998021568771b858d4e68bf6bf290c08d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-774682",
	                "Source": "/var/lib/docker/volumes/pause-774682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-774682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-774682",
	                "name.minikube.sigs.k8s.io": "pause-774682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d22a15167e9a4f3134d1ec0b3d734cd54b8e290c10c12cc6f1cd11640552245d",
	            "SandboxKey": "/var/run/docker/netns/d22a15167e9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-774682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:67:3c:c4:c7:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dce15c848469907113b6d5a204240ebd31ad2ddccf9a79c0dd47371856ca1472",
	                    "EndpointID": "4e4db4557cc85219dd93723fce0dc7082272023484878f51365b14eea906a7d9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-774682",
	                        "2108796dc76e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-774682 -n pause-774682
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-774682 -n pause-774682: exit status 2 (364.690305ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-774682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-774682 logs -n 25: (1.347356711s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-778048 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:22 UTC │
	│ start   │ -p missing-upgrade-210819 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-210819    │ jenkins │ v1.35.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p NoKubernetes-778048 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:22 UTC │
	│ delete  │ -p NoKubernetes-778048                                                                                                                          │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:22 UTC │
	│ start   │ -p NoKubernetes-778048 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:22 UTC │ 02 Dec 25 20:23 UTC │
	│ ssh     │ -p NoKubernetes-778048 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │                     │
	│ stop    │ -p NoKubernetes-778048                                                                                                                          │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p NoKubernetes-778048 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ ssh     │ -p NoKubernetes-778048 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │                     │
	│ delete  │ -p NoKubernetes-778048                                                                                                                          │ NoKubernetes-778048       │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-080046 │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p missing-upgrade-210819 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-210819    │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:24 UTC │
	│ stop    │ -p kubernetes-upgrade-080046                                                                                                                    │ kubernetes-upgrade-080046 │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │ 02 Dec 25 20:23 UTC │
	│ start   │ -p kubernetes-upgrade-080046 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-080046 │ jenkins │ v1.37.0 │ 02 Dec 25 20:23 UTC │                     │
	│ delete  │ -p missing-upgrade-210819                                                                                                                       │ missing-upgrade-210819    │ jenkins │ v1.37.0 │ 02 Dec 25 20:24 UTC │ 02 Dec 25 20:24 UTC │
	│ start   │ -p stopped-upgrade-085945 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-085945    │ jenkins │ v1.35.0 │ 02 Dec 25 20:24 UTC │ 02 Dec 25 20:25 UTC │
	│ stop    │ stopped-upgrade-085945 stop                                                                                                                     │ stopped-upgrade-085945    │ jenkins │ v1.35.0 │ 02 Dec 25 20:25 UTC │ 02 Dec 25 20:25 UTC │
	│ start   │ -p stopped-upgrade-085945 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-085945    │ jenkins │ v1.37.0 │ 02 Dec 25 20:25 UTC │ 02 Dec 25 20:29 UTC │
	│ delete  │ -p stopped-upgrade-085945                                                                                                                       │ stopped-upgrade-085945    │ jenkins │ v1.37.0 │ 02 Dec 25 20:29 UTC │ 02 Dec 25 20:29 UTC │
	│ start   │ -p running-upgrade-568729 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-568729    │ jenkins │ v1.35.0 │ 02 Dec 25 20:29 UTC │ 02 Dec 25 20:30 UTC │
	│ start   │ -p running-upgrade-568729 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-568729    │ jenkins │ v1.37.0 │ 02 Dec 25 20:30 UTC │ 02 Dec 25 20:30 UTC │
	│ delete  │ -p running-upgrade-568729                                                                                                                       │ running-upgrade-568729    │ jenkins │ v1.37.0 │ 02 Dec 25 20:30 UTC │ 02 Dec 25 20:30 UTC │
	│ start   │ -p pause-774682 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-774682              │ jenkins │ v1.37.0 │ 02 Dec 25 20:30 UTC │ 02 Dec 25 20:32 UTC │
	│ start   │ -p pause-774682 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-774682              │ jenkins │ v1.37.0 │ 02 Dec 25 20:32 UTC │ 02 Dec 25 20:32 UTC │
	│ pause   │ -p pause-774682 --alsologtostderr -v=5                                                                                                          │ pause-774682              │ jenkins │ v1.37.0 │ 02 Dec 25 20:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:32:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:32:05.866145  211223 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:32:05.866676  211223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:32:05.866711  211223 out.go:374] Setting ErrFile to fd 2...
	I1202 20:32:05.866733  211223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:32:05.867033  211223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:32:05.867450  211223 out.go:368] Setting JSON to false
	I1202 20:32:05.868463  211223 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8064,"bootTime":1764699462,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 20:32:05.868560  211223 start.go:143] virtualization:  
	I1202 20:32:05.873810  211223 out.go:179] * [pause-774682] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 20:32:05.877128  211223 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 20:32:05.877215  211223 notify.go:221] Checking for updates...
	I1202 20:32:05.880883  211223 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:32:05.883894  211223 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 20:32:05.886828  211223 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 20:32:05.889691  211223 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 20:32:05.892605  211223 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:32:05.895949  211223 config.go:182] Loaded profile config "pause-774682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:32:05.896524  211223 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:32:05.925781  211223 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 20:32:05.925897  211223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:32:05.993965  211223 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 20:32:05.984225051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:32:05.994073  211223 docker.go:319] overlay module found
	I1202 20:32:05.997225  211223 out.go:179] * Using the docker driver based on existing profile
	I1202 20:32:06.000111  211223 start.go:309] selected driver: docker
	I1202 20:32:06.000136  211223 start.go:927] validating driver "docker" against &{Name:pause-774682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:32:06.000272  211223 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:32:06.000375  211223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:32:06.061731  211223 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 20:32:06.052662654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:32:06.062155  211223 cni.go:84] Creating CNI manager for ""
	I1202 20:32:06.062234  211223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:32:06.062282  211223 start.go:353] cluster config:
	{Name:pause-774682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:32:06.065595  211223 out.go:179] * Starting "pause-774682" primary control-plane node in "pause-774682" cluster
	I1202 20:32:06.068447  211223 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 20:32:06.071585  211223 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:32:06.074323  211223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:32:06.074370  211223 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 20:32:06.074380  211223 cache.go:65] Caching tarball of preloaded images
	I1202 20:32:06.074410  211223 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:32:06.074502  211223 preload.go:238] Found /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1202 20:32:06.074512  211223 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:32:06.074643  211223 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/config.json ...
	I1202 20:32:06.099450  211223 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 20:32:06.099471  211223 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 20:32:06.099487  211223 cache.go:243] Successfully downloaded all kic artifacts
	I1202 20:32:06.099521  211223 start.go:360] acquireMachinesLock for pause-774682: {Name:mk542181bd319b24dbfd31147451cd023cc98a07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:32:06.099579  211223 start.go:364] duration metric: took 36.339µs to acquireMachinesLock for "pause-774682"
	I1202 20:32:06.099615  211223 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:32:06.099622  211223 fix.go:54] fixHost starting: 
	I1202 20:32:06.099877  211223 cli_runner.go:164] Run: docker container inspect pause-774682 --format={{.State.Status}}
	I1202 20:32:06.121602  211223 fix.go:112] recreateIfNeeded on pause-774682: state=Running err=<nil>
	W1202 20:32:06.121635  211223 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:32:06.124883  211223 out.go:252] * Updating the running docker "pause-774682" container ...
	I1202 20:32:06.124923  211223 machine.go:94] provisionDockerMachine start ...
	I1202 20:32:06.125025  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.143440  211223 main.go:143] libmachine: Using SSH client type: native
	I1202 20:32:06.143772  211223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1202 20:32:06.143788  211223 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:32:06.297495  211223 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-774682
	
	I1202 20:32:06.297522  211223 ubuntu.go:182] provisioning hostname "pause-774682"
	I1202 20:32:06.297584  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.315372  211223 main.go:143] libmachine: Using SSH client type: native
	I1202 20:32:06.315688  211223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1202 20:32:06.315703  211223 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-774682 && echo "pause-774682" | sudo tee /etc/hostname
	I1202 20:32:06.481284  211223 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-774682
	
	I1202 20:32:06.481391  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.499766  211223 main.go:143] libmachine: Using SSH client type: native
	I1202 20:32:06.500093  211223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1202 20:32:06.500118  211223 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-774682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-774682/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-774682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:32:06.653832  211223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:32:06.653857  211223 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2526/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2526/.minikube}
	I1202 20:32:06.653877  211223 ubuntu.go:190] setting up certificates
	I1202 20:32:06.653887  211223 provision.go:84] configureAuth start
	I1202 20:32:06.653969  211223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-774682
	I1202 20:32:06.672988  211223 provision.go:143] copyHostCerts
	I1202 20:32:06.673086  211223 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem, removing ...
	I1202 20:32:06.673106  211223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem
	I1202 20:32:06.673188  211223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/ca.pem (1082 bytes)
	I1202 20:32:06.673326  211223 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem, removing ...
	I1202 20:32:06.673345  211223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem
	I1202 20:32:06.673389  211223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/cert.pem (1123 bytes)
	I1202 20:32:06.673466  211223 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem, removing ...
	I1202 20:32:06.673480  211223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem
	I1202 20:32:06.673509  211223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2526/.minikube/key.pem (1675 bytes)
	I1202 20:32:06.673587  211223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem org=jenkins.pause-774682 san=[127.0.0.1 192.168.85.2 localhost minikube pause-774682]
	I1202 20:32:06.771681  211223 provision.go:177] copyRemoteCerts
	I1202 20:32:06.771753  211223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:32:06.771797  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.791984  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:06.897818  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:32:06.916404  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1202 20:32:06.935259  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:32:06.961621  211223 provision.go:87] duration metric: took 307.711118ms to configureAuth
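configureAuth above regenerates the machine's server certificate, signed by the local minikube CA, with the SAN set shown in the log (127.0.0.1, 192.168.85.2, localhost, minikube, pause-774682), then copies it to /etc/docker on the node. A hedged Go sketch of issuing such a SAN-bearing certificate from an existing CA pair; paths, validity period, and the PKCS#1 CA-key assumption are illustrative, not minikube's actual code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // readPEM loads the first PEM block from a file (error handling kept minimal).
    func readPEM(path string) []byte {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatalf("no PEM block in %s", path)
    	}
    	return block.Bytes
    }

    func main() {
    	// Existing CA pair; paths are illustrative, assuming a PKCS#1 RSA CA key.
    	caCert, err := x509.ParseCertificate(readPEM("ca.pem"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(readPEM("ca-key.pem"))
    	if err != nil {
    		log.Fatal(err)
    	}

    	// New server key plus a template carrying the SANs from the log above.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.pause-774682"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "pause-774682"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
    	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
    }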
	I1202 20:32:06.961649  211223 ubuntu.go:206] setting minikube options for container-runtime
	I1202 20:32:06.961908  211223 config.go:182] Loaded profile config "pause-774682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:32:06.962018  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:06.979703  211223 main.go:143] libmachine: Using SSH client type: native
	I1202 20:32:06.980020  211223 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1202 20:32:06.980040  211223 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:32:12.364345  211223 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:32:12.364365  211223 machine.go:97] duration metric: took 6.239433951s to provisionDockerMachine
	I1202 20:32:12.364385  211223 start.go:293] postStartSetup for "pause-774682" (driver="docker")
	I1202 20:32:12.364397  211223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:32:12.364484  211223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:32:12.364525  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:12.382512  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:12.485587  211223 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:32:12.489032  211223 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 20:32:12.489062  211223 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 20:32:12.489072  211223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/addons for local assets ...
	I1202 20:32:12.489127  211223 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2526/.minikube/files for local assets ...
	I1202 20:32:12.489212  211223 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem -> 44702.pem in /etc/ssl/certs
	I1202 20:32:12.489320  211223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:32:12.496981  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /etc/ssl/certs/44702.pem (1708 bytes)
	I1202 20:32:12.514882  211223 start.go:296] duration metric: took 150.48082ms for postStartSetup
	I1202 20:32:12.514981  211223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:32:12.515051  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:12.532517  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:12.634670  211223 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 20:32:12.639528  211223 fix.go:56] duration metric: took 6.53989967s for fixHost
	I1202 20:32:12.639557  211223 start.go:83] releasing machines lock for "pause-774682", held for 6.539965752s
	I1202 20:32:12.639621  211223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-774682
	I1202 20:32:12.657042  211223 ssh_runner.go:195] Run: cat /version.json
	I1202 20:32:12.657093  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:12.657094  211223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:32:12.657148  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-774682
	I1202 20:32:12.672846  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:12.675648  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/pause-774682/id_rsa Username:docker}
	I1202 20:32:12.773220  211223 ssh_runner.go:195] Run: systemctl --version
	I1202 20:32:12.862430  211223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:32:12.901869  211223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:32:12.906111  211223 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:32:12.906203  211223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:32:12.913708  211223 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:32:12.913782  211223 start.go:496] detecting cgroup driver to use...
	I1202 20:32:12.913826  211223 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 20:32:12.913898  211223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:32:12.929618  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:32:12.942717  211223 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:32:12.942780  211223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:32:12.958294  211223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:32:12.971170  211223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:32:13.104612  211223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:32:13.240267  211223 docker.go:234] disabling docker service ...
	I1202 20:32:13.240338  211223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:32:13.255382  211223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:32:13.267881  211223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:32:13.423151  211223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:32:13.558016  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:32:13.571676  211223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:32:13.585922  211223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:32:13.585986  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.595079  211223 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:32:13.595153  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.604222  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.612852  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.621834  211223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:32:13.630374  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.639155  211223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.647434  211223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:32:13.656446  211223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:32:13.664317  211223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:32:13.673987  211223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:32:13.803501  211223 ssh_runner.go:195] Run: sudo systemctl restart crio
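The sequence of sed edits above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart: it pins the pause image, switches the cgroup manager to cgroupfs, places conmon in the pod cgroup, and opens unprivileged low ports. Reconstructed from those commands (a sketch of the relevant settings, not a dump of the actual file, and the table names are an assumption), the drop-in should end up containing roughly:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]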
	I1202 20:32:14.025439  211223 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:32:14.025556  211223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:32:14.029397  211223 start.go:564] Will wait 60s for crictl version
	I1202 20:32:14.029461  211223 ssh_runner.go:195] Run: which crictl
	I1202 20:32:14.033162  211223 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 20:32:14.065715  211223 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1202 20:32:14.065804  211223 ssh_runner.go:195] Run: crio --version
	I1202 20:32:14.104143  211223 ssh_runner.go:195] Run: crio --version
	I1202 20:32:14.139188  211223 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.2 ...
	I1202 20:32:14.142064  211223 cli_runner.go:164] Run: docker network inspect pause-774682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 20:32:14.157702  211223 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 20:32:14.161566  211223 kubeadm.go:884] updating cluster {Name:pause-774682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:32:14.161749  211223 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:32:14.161812  211223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:32:14.199403  211223 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:32:14.199428  211223 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:32:14.199483  211223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:32:14.223705  211223 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:32:14.223729  211223 cache_images.go:86] Images are preloaded, skipping loading
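The two `sudo crictl images --output json` runs above are minikube confirming that the preload tarball already populated CRI-O's image store, so nothing needs to be pulled or extracted. A rough sketch of the same check, assuming crictl's JSON output exposes an `images` array with `repoTags` (treat the field names and the example tag as assumptions):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the shape of `crictl images --output json` (assumed).
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// One image a v1.34.2 preload would be expected to carry.
    	fmt.Println("kube-apiserver preloaded:", have["registry.k8s.io/kube-apiserver:v1.34.2"])
    }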
	I1202 20:32:14.223737  211223 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1202 20:32:14.223831  211223 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-774682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:32:14.223904  211223 ssh_runner.go:195] Run: crio config
	I1202 20:32:14.280743  211223 cni.go:84] Creating CNI manager for ""
	I1202 20:32:14.280811  211223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 20:32:14.280846  211223 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:32:14.280895  211223 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-774682 NodeName:pause-774682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:32:14.281060  211223 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-774682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:32:14.281162  211223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:32:14.288506  211223 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:32:14.288609  211223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:32:14.296014  211223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1202 20:32:14.308293  211223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:32:14.320627  211223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1202 20:32:14.332719  211223 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 20:32:14.336609  211223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:32:14.461121  211223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:32:14.474823  211223 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682 for IP: 192.168.85.2
	I1202 20:32:14.474845  211223 certs.go:195] generating shared ca certs ...
	I1202 20:32:14.474863  211223 certs.go:227] acquiring lock for ca certs: {Name:mk84565ca67e5ce6f3d2b2c5c2b09a77f82d08d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:32:14.474993  211223 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key
	I1202 20:32:14.475041  211223 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key
	I1202 20:32:14.475053  211223 certs.go:257] generating profile certs ...
	I1202 20:32:14.475137  211223 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.key
	I1202 20:32:14.475207  211223 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/apiserver.key.ed7bff59
	I1202 20:32:14.475286  211223 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/proxy-client.key
	I1202 20:32:14.475403  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem (1338 bytes)
	W1202 20:32:14.475440  211223 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470_empty.pem, impossibly tiny 0 bytes
	I1202 20:32:14.475452  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:32:14.475478  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:32:14.475510  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:32:14.475541  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/certs/key.pem (1675 bytes)
	I1202 20:32:14.475592  211223 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem (1708 bytes)
	I1202 20:32:14.476218  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:32:14.494553  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:32:14.512228  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:32:14.529335  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 20:32:14.546413  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 20:32:14.563951  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:32:14.581817  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:32:14.599479  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:32:14.617200  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/ssl/certs/44702.pem --> /usr/share/ca-certificates/44702.pem (1708 bytes)
	I1202 20:32:14.634271  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:32:14.650783  211223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2526/.minikube/certs/4470.pem --> /usr/share/ca-certificates/4470.pem (1338 bytes)
	I1202 20:32:14.667481  211223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:32:14.679372  211223 ssh_runner.go:195] Run: openssl version
	I1202 20:32:14.685334  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4470.pem && ln -fs /usr/share/ca-certificates/4470.pem /etc/ssl/certs/4470.pem"
	I1202 20:32:14.693553  211223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4470.pem
	I1202 20:32:14.697172  211223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:09 /usr/share/ca-certificates/4470.pem
	I1202 20:32:14.697263  211223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4470.pem
	I1202 20:32:14.737724  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4470.pem /etc/ssl/certs/51391683.0"
	I1202 20:32:14.745748  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44702.pem && ln -fs /usr/share/ca-certificates/44702.pem /etc/ssl/certs/44702.pem"
	I1202 20:32:14.753801  211223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44702.pem
	I1202 20:32:14.757256  211223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:09 /usr/share/ca-certificates/44702.pem
	I1202 20:32:14.757316  211223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44702.pem
	I1202 20:32:14.799211  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:32:14.807113  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:32:14.815072  211223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:32:14.819226  211223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 18:49 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:32:14.819341  211223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:32:14.861435  211223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
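The ls/openssl/ln sequence above installs each CA into the node's trust directory under its OpenSSL subject-hash name: `openssl x509 -hash -noout` prints the hash (b5213941 for minikubeCA here), and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so TLS clients on the node can resolve it. A small sketch of the same idea, with the paths taken from the log and the helper name purely illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCA symlinks a CA PEM into /etc/ssl/certs under its OpenSSL subject hash.
    func linkCA(pemPath string) error {
    	// `openssl x509 -hash -noout -in <cert>` prints the subject hash OpenSSL
    	// uses for trust-store lookups (e.g. b5213941 for minikubeCA).
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }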
	I1202 20:32:14.870633  211223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:32:14.875051  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:32:14.920460  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:32:14.979467  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:32:15.045569  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:32:15.150845  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:32:15.240758  211223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
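Each `openssl x509 ... -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration before the cluster is restarted. The same test in Go, parsing the certificate directly instead of shelling out (a sketch; the file path is one of those from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`: still valid in 24h?
    	ok := time.Now().Add(24 * time.Hour).Before(cert.NotAfter)
    	fmt.Println("valid for at least 24h:", ok)
    }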
	I1202 20:32:15.309058  211223 kubeadm.go:401] StartCluster: {Name:pause-774682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-774682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:32:15.309245  211223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:32:15.309340  211223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:32:15.354301  211223 cri.go:89] found id: "18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6"
	I1202 20:32:15.354375  211223 cri.go:89] found id: "0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1"
	I1202 20:32:15.354394  211223 cri.go:89] found id: "720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b"
	I1202 20:32:15.354412  211223 cri.go:89] found id: "40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45"
	I1202 20:32:15.354445  211223 cri.go:89] found id: "8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d"
	I1202 20:32:15.354466  211223 cri.go:89] found id: "ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb"
	I1202 20:32:15.354483  211223 cri.go:89] found id: "1a512904d19713ec18413a4d149443e9c01ab0567733a1477ec83a78dbbb44d1"
	I1202 20:32:15.354501  211223 cri.go:89] found id: "749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036"
	I1202 20:32:15.354532  211223 cri.go:89] found id: "155b4f24673bf135703ca3c7d2801c37e6f9067375a70dabce8152b9423f0975"
	I1202 20:32:15.354558  211223 cri.go:89] found id: "2951d99fb4dcbff701c97808399b1ff8aa99a8e8c9f4ff303e5dcd3a11e69d29"
	I1202 20:32:15.354577  211223 cri.go:89] found id: "f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3"
	I1202 20:32:15.354595  211223 cri.go:89] found id: "88c92b704d85b8c7806dc927206e2665a9600188d1426de4a7fb8070836531e7"
	I1202 20:32:15.354634  211223 cri.go:89] found id: "d72bb0bdd409ca1b9eee6c2382b896504c9bf98f1c09d2a7c0bb239af2a5c6bc"
	I1202 20:32:15.354656  211223 cri.go:89] found id: "12b210cc547fecac184a95362f3fc71a3557f11c3f96e014c6090ff680e2c37c"
	I1202 20:32:15.354674  211223 cri.go:89] found id: ""
	I1202 20:32:15.354753  211223 ssh_runner.go:195] Run: sudo runc list -f json
	W1202 20:32:15.371329  211223 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T20:32:15Z" level=error msg="open /run/runc: no such file or directory"
	I1202 20:32:15.371473  211223 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:32:15.386823  211223 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:32:15.386890  211223 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:32:15.386976  211223 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:32:15.399052  211223 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:32:15.399790  211223 kubeconfig.go:125] found "pause-774682" server: "https://192.168.85.2:8443"
	I1202 20:32:15.400713  211223 kapi.go:59] client config for pause-774682: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 20:32:15.401470  211223 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 20:32:15.401565  211223 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 20:32:15.401601  211223 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 20:32:15.401624  211223 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 20:32:15.401641  211223 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 20:32:15.402031  211223 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:32:15.414186  211223 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 20:32:15.414270  211223 kubeadm.go:602] duration metric: took 27.360522ms to restartPrimaryControlPlane
	I1202 20:32:15.414294  211223 kubeadm.go:403] duration metric: took 105.24654ms to StartCluster
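The `sudo diff -u` above compares the kubeadm config already on the node with the freshly rendered kubeadm.yaml.new; an empty diff (exit status 0) is what lets minikube conclude the running control plane needs no reconfiguration. A minimal sketch of that decision, relying only on diff's exit-code convention (0 = identical, 1 = differs, >1 = error):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	if err == nil {
    		fmt.Println("configs identical: restart existing control plane")
    		return
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		fmt.Printf("configs differ, reconfiguration needed:\n%s", out)
    		return
    	}
    	panic(err) // exit code >1 means diff itself failed
    }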
	I1202 20:32:15.414354  211223 settings.go:142] acquiring lock: {Name:mk69ee55386b02f678d76490ac2126f51427c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:32:15.414445  211223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 20:32:15.415402  211223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2526/kubeconfig: {Name:mkf861d7862b02c35dcec859117c7de6a4c4e387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:32:15.415680  211223 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:32:15.416070  211223 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:32:15.416499  211223 config.go:182] Loaded profile config "pause-774682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:32:15.421942  211223 out.go:179] * Enabled addons: 
	I1202 20:32:15.422042  211223 out.go:179] * Verifying Kubernetes components...
	I1202 20:32:15.424743  211223 addons.go:530] duration metric: took 8.674774ms for enable addons: enabled=[]
	I1202 20:32:15.424856  211223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:32:15.698590  211223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:32:15.720217  211223 node_ready.go:35] waiting up to 6m0s for node "pause-774682" to be "Ready" ...
	I1202 20:32:20.288357  211223 node_ready.go:49] node "pause-774682" is "Ready"
	I1202 20:32:20.288382  211223 node_ready.go:38] duration metric: took 4.56813541s for node "pause-774682" to be "Ready" ...
	I1202 20:32:20.288397  211223 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:32:20.288455  211223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:32:20.309538  211223 api_server.go:72] duration metric: took 4.893792803s to wait for apiserver process to appear ...
	I1202 20:32:20.309560  211223 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:32:20.309578  211223 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 20:32:20.377022  211223 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:32:20.377114  211223 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:32:20.809698  211223 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 20:32:20.819268  211223 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:32:20.819304  211223 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
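The two healthz dumps above are expected right after a restart: the apiserver answers 500 while its post-start hooks (bootstrap-controller, rbac/bootstrap-roles, and so on) are still completing, and minikube simply re-polls /healthz until it returns 200. A compact sketch of such a poll loop against the endpoint from the log; it skips TLS verification purely for brevity, whereas minikube verifies against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz not ready yet:", resp.Status)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver")
    }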
	I1202 20:32:23.503840  181375 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00103374s
	I1202 20:32:23.503874  181375 kubeadm.go:319] 
	I1202 20:32:23.503956  181375 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 20:32:23.504007  181375 kubeadm.go:319] 	- The kubelet is not running
	I1202 20:32:23.504125  181375 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 20:32:23.504133  181375 kubeadm.go:319] 
	I1202 20:32:23.504237  181375 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 20:32:23.504278  181375 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 20:32:23.504310  181375 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 20:32:23.504314  181375 kubeadm.go:319] 
	I1202 20:32:23.508552  181375 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 20:32:23.509049  181375 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 20:32:23.509182  181375 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:32:23.509456  181375 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 20:32:23.509466  181375 kubeadm.go:319] 
	I1202 20:32:23.509543  181375 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 20:32:23.509696  181375 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00103374s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
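	If reproducing this failure, the checks kubeadm names above can be run directly on the node (a minimal sketch, assuming shell access to the node, e.g. via 'minikube ssh'; the entry point is an assumption for illustration, the three commands are the ones named in the output above):
	  systemctl status kubelet                      # confirm whether the kubelet service is active
	  journalctl -xeu kubelet                       # inspect kubelet logs for the misconfiguration kubeadm suspects
	  curl -sSL http://127.0.0.1:10248/healthz      # the same health endpoint kubeadm polls for up to 4m0s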
	
	I1202 20:32:23.509800  181375 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 20:32:23.926878  181375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:32:23.941936  181375 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 20:32:23.941999  181375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:32:23.956231  181375 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:32:23.956251  181375 kubeadm.go:158] found existing configuration files:
	
	I1202 20:32:23.956312  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:32:23.965353  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:32:23.965424  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:32:23.974074  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:32:23.983217  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:32:23.983330  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:32:23.991422  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:32:24.000663  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:32:24.000765  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:32:24.011295  181375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:32:24.021043  181375 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:32:24.021186  181375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:32:24.035281  181375 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 20:32:24.091828  181375 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 20:32:24.092279  181375 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 20:32:24.177998  181375 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 20:32:24.178147  181375 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 20:32:24.178212  181375 kubeadm.go:319] OS: Linux
	I1202 20:32:24.178291  181375 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 20:32:24.178370  181375 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 20:32:24.178450  181375 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 20:32:24.178518  181375 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 20:32:24.178610  181375 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 20:32:24.178682  181375 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 20:32:24.178775  181375 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 20:32:24.178854  181375 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 20:32:24.178913  181375 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 20:32:24.251731  181375 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 20:32:24.251848  181375 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 20:32:24.251946  181375 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 20:32:24.267154  181375 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 20:32:24.270532  181375 out.go:252]   - Generating certificates and keys ...
	I1202 20:32:24.270644  181375 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 20:32:24.270773  181375 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 20:32:24.270888  181375 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 20:32:24.270971  181375 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 20:32:24.271058  181375 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 20:32:24.271152  181375 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 20:32:24.271232  181375 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 20:32:24.271315  181375 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 20:32:24.271403  181375 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 20:32:24.271485  181375 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 20:32:24.271560  181375 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 20:32:24.271633  181375 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 20:32:24.364613  181375 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 20:32:24.725639  181375 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 20:32:24.885871  181375 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 20:32:25.393339  181375 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 20:32:25.623587  181375 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 20:32:25.624397  181375 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 20:32:25.627107  181375 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 20:32:21.309736  211223 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 20:32:21.319165  211223 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1202 20:32:21.320933  211223 api_server.go:141] control plane version: v1.34.2
	I1202 20:32:21.320964  211223 api_server.go:131] duration metric: took 1.011396967s to wait for apiserver health ...
	I1202 20:32:21.320974  211223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:32:21.325141  211223 system_pods.go:59] 7 kube-system pods found
	I1202 20:32:21.325187  211223 system_pods.go:61] "coredns-66bc5c9577-k2d8x" [a9ab0b8e-b81f-4abe-b30f-2fa936fa2aa4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:32:21.325196  211223 system_pods.go:61] "etcd-pause-774682" [7a48cfbe-199b-4f69-9a61-6381b804ab50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:32:21.325201  211223 system_pods.go:61] "kindnet-hh7zt" [2b9582da-2b2c-4243-baf2-b681960b8809] Running
	I1202 20:32:21.325208  211223 system_pods.go:61] "kube-apiserver-pause-774682" [096b9a71-1403-42a3-94f0-b8f03a7d003a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:32:21.325214  211223 system_pods.go:61] "kube-controller-manager-pause-774682" [adbd4fa7-8442-42ad-a082-4302656091ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:32:21.325218  211223 system_pods.go:61] "kube-proxy-6m8f8" [fccfb9dc-b054-469a-8dc0-1fa4c56ec683] Running
	I1202 20:32:21.325225  211223 system_pods.go:61] "kube-scheduler-pause-774682" [72a1fbfe-a916-4f15-80ca-74452cdf74c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:32:21.325231  211223 system_pods.go:74] duration metric: took 4.250801ms to wait for pod list to return data ...
	I1202 20:32:21.325243  211223 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:32:21.327655  211223 default_sa.go:45] found service account: "default"
	I1202 20:32:21.327680  211223 default_sa.go:55] duration metric: took 2.429741ms for default service account to be created ...
	I1202 20:32:21.327689  211223 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:32:21.331752  211223 system_pods.go:86] 7 kube-system pods found
	I1202 20:32:21.331791  211223 system_pods.go:89] "coredns-66bc5c9577-k2d8x" [a9ab0b8e-b81f-4abe-b30f-2fa936fa2aa4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:32:21.331801  211223 system_pods.go:89] "etcd-pause-774682" [7a48cfbe-199b-4f69-9a61-6381b804ab50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:32:21.331807  211223 system_pods.go:89] "kindnet-hh7zt" [2b9582da-2b2c-4243-baf2-b681960b8809] Running
	I1202 20:32:21.331814  211223 system_pods.go:89] "kube-apiserver-pause-774682" [096b9a71-1403-42a3-94f0-b8f03a7d003a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:32:21.331822  211223 system_pods.go:89] "kube-controller-manager-pause-774682" [adbd4fa7-8442-42ad-a082-4302656091ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:32:21.331827  211223 system_pods.go:89] "kube-proxy-6m8f8" [fccfb9dc-b054-469a-8dc0-1fa4c56ec683] Running
	I1202 20:32:21.331835  211223 system_pods.go:89] "kube-scheduler-pause-774682" [72a1fbfe-a916-4f15-80ca-74452cdf74c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:32:21.331841  211223 system_pods.go:126] duration metric: took 4.147304ms to wait for k8s-apps to be running ...
	I1202 20:32:21.331853  211223 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:32:21.331910  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:32:21.346836  211223 system_svc.go:56] duration metric: took 14.967084ms WaitForService to wait for kubelet
	I1202 20:32:21.346867  211223 kubeadm.go:587] duration metric: took 5.931126144s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:32:21.346887  211223 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:32:21.358366  211223 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 20:32:21.358402  211223 node_conditions.go:123] node cpu capacity is 2
	I1202 20:32:21.358417  211223 node_conditions.go:105] duration metric: took 11.524548ms to run NodePressure ...
	I1202 20:32:21.358430  211223 start.go:242] waiting for startup goroutines ...
	I1202 20:32:21.358438  211223 start.go:247] waiting for cluster config update ...
	I1202 20:32:21.358446  211223 start.go:256] writing updated cluster config ...
	I1202 20:32:21.358736  211223 ssh_runner.go:195] Run: rm -f paused
	I1202 20:32:21.362807  211223 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:32:21.363418  211223 kapi.go:59] client config for pause-774682: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/profiles/pause-774682/client.key", CAFile:"/home/jenkins/minikube-integration/22021-2526/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 20:32:21.369963  211223 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k2d8x" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:32:23.376507  211223 pod_ready.go:104] pod "coredns-66bc5c9577-k2d8x" is not "Ready", error: <nil>
	W1202 20:32:25.378921  211223 pod_ready.go:104] pod "coredns-66bc5c9577-k2d8x" is not "Ready", error: <nil>
	I1202 20:32:25.630436  181375 out.go:252]   - Booting up control plane ...
	I1202 20:32:25.630545  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 20:32:25.630624  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 20:32:25.631009  181375 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 20:32:25.647838  181375 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 20:32:25.648215  181375 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 20:32:25.656245  181375 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 20:32:25.656744  181375 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 20:32:25.656812  181375 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 20:32:25.794935  181375 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 20:32:25.795058  181375 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 20:32:27.376173  211223 pod_ready.go:94] pod "coredns-66bc5c9577-k2d8x" is "Ready"
	I1202 20:32:27.376203  211223 pod_ready.go:86] duration metric: took 6.006215347s for pod "coredns-66bc5c9577-k2d8x" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:27.378665  211223 pod_ready.go:83] waiting for pod "etcd-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:27.383193  211223 pod_ready.go:94] pod "etcd-pause-774682" is "Ready"
	I1202 20:32:27.383219  211223 pod_ready.go:86] duration metric: took 4.526994ms for pod "etcd-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:27.385812  211223 pod_ready.go:83] waiting for pod "kube-apiserver-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:32:29.391378  211223 pod_ready.go:104] pod "kube-apiserver-pause-774682" is not "Ready", error: <nil>
	W1202 20:32:31.404044  211223 pod_ready.go:104] pod "kube-apiserver-pause-774682" is not "Ready", error: <nil>
	I1202 20:32:32.391209  211223 pod_ready.go:94] pod "kube-apiserver-pause-774682" is "Ready"
	I1202 20:32:32.391240  211223 pod_ready.go:86] duration metric: took 5.005399706s for pod "kube-apiserver-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.393612  211223 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.398098  211223 pod_ready.go:94] pod "kube-controller-manager-pause-774682" is "Ready"
	I1202 20:32:32.398128  211223 pod_ready.go:86] duration metric: took 4.485969ms for pod "kube-controller-manager-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.402642  211223 pod_ready.go:83] waiting for pod "kube-proxy-6m8f8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.407264  211223 pod_ready.go:94] pod "kube-proxy-6m8f8" is "Ready"
	I1202 20:32:32.407288  211223 pod_ready.go:86] duration metric: took 4.620202ms for pod "kube-proxy-6m8f8" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.409273  211223 pod_ready.go:83] waiting for pod "kube-scheduler-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.774194  211223 pod_ready.go:94] pod "kube-scheduler-pause-774682" is "Ready"
	I1202 20:32:32.774225  211223 pod_ready.go:86] duration metric: took 364.925252ms for pod "kube-scheduler-pause-774682" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:32:32.774246  211223 pod_ready.go:40] duration metric: took 11.411402047s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:32:32.829594  211223 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1202 20:32:32.832656  211223 out.go:179] * Done! kubectl is now configured to use "pause-774682" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.090390408Z" level=info msg="Created container 720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b: kube-system/etcd-pause-774682/etcd" id=e326ddec-92ef-4dbe-89ec-69c3a6e2cdf3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.091795505Z" level=info msg="Starting container: 40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45" id=4afb54e6-5717-4433-9326-523479f691a0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.094144607Z" level=info msg="Created container 0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1: kube-system/kindnet-hh7zt/kindnet-cni" id=b99e806f-b392-4c5e-9ebe-9e5322874fa7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.097199692Z" level=info msg="Starting container: 0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1" id=bfc19f32-82b6-4d79-8e63-b64de66c25b4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.100670372Z" level=info msg="Starting container: 720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b" id=b7fd2ae1-1400-4570-99ff-08c68d6e4e8e name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.113474448Z" level=info msg="Started container" PID=2310 containerID=8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d description=kube-system/kube-proxy-6m8f8/kube-proxy id=89447778-2c4d-4bc5-be7b-44c355ab9f34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e816f1745372d94586830dcdf54578908685a0ca37d59e354c953001216545c6
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.113838746Z" level=info msg="Started container" PID=2304 containerID=ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb description=kube-system/kube-controller-manager-pause-774682/kube-controller-manager id=404b3ecf-73a5-4c8e-a4c3-83e7b1ab7ee6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa41a923be1ee90eb3f32d9bed72be02f58fd74b5843a77d64dcd84bc0e10a5f
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.125783195Z" level=info msg="Started container" PID=2326 containerID=720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b description=kube-system/etcd-pause-774682/etcd id=b7fd2ae1-1400-4570-99ff-08c68d6e4e8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a14e3d0708a757a480a2b774f5181bb68ef495132af8f19cf61d581fa3875f5
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.134114635Z" level=info msg="Started container" PID=2320 containerID=40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45 description=kube-system/coredns-66bc5c9577-k2d8x/coredns id=4afb54e6-5717-4433-9326-523479f691a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e57cae3f7f4cfc83263f52f7c20c7df3e76d188fb455c68ce31ad1364481429a
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.138152396Z" level=info msg="Started container" PID=2321 containerID=0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1 description=kube-system/kindnet-hh7zt/kindnet-cni id=bfc19f32-82b6-4d79-8e63-b64de66c25b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7240ccfc89b580c0c016b6fae574541b10b582e25c52c02f2eec3bc0f399695e
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.186808832Z" level=info msg="Created container 18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6: kube-system/kube-scheduler-pause-774682/kube-scheduler" id=abb93f0f-b6ff-4d44-b969-d6bfeec93d37 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.187680758Z" level=info msg="Starting container: 18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6" id=a1c0d65e-daf6-4a43-9ca9-e85aff16bfa4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 20:32:15 pause-774682 crio[2072]: time="2025-12-02T20:32:15.190381449Z" level=info msg="Started container" PID=2338 containerID=18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6 description=kube-system/kube-scheduler-pause-774682/kube-scheduler id=a1c0d65e-daf6-4a43-9ca9-e85aff16bfa4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e3b57b1a1de3298870ff109736f72e217d14a73e8aaa0887306a6dad9f76348
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.591312274Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.598355734Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.59850869Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.598603021Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.609760942Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.60992641Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.610007244Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.613367248Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.613551251Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.613703025Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.617762873Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 02 20:32:25 pause-774682 crio[2072]: time="2025-12-02T20:32:25.617929925Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	18e2a378f13b2       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   22 seconds ago       Running             kube-scheduler            1                   6e3b57b1a1de3       kube-scheduler-pause-774682            kube-system
	0efee45e9768d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   7240ccfc89b58       kindnet-hh7zt                          kube-system
	720b2cec41b0e       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   22 seconds ago       Running             etcd                      1                   9a14e3d0708a7       etcd-pause-774682                      kube-system
	40517c60f4a8d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   e57cae3f7f4cf       coredns-66bc5c9577-k2d8x               kube-system
	8d6248562e377       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   22 seconds ago       Running             kube-proxy                1                   e816f1745372d       kube-proxy-6m8f8                       kube-system
	ffa7d32b08871       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   22 seconds ago       Running             kube-controller-manager   1                   aa41a923be1ee       kube-controller-manager-pause-774682   kube-system
	1a512904d1971       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   22 seconds ago       Running             kube-apiserver            1                   54c8cb46c371a       kube-apiserver-pause-774682            kube-system
	749b87a7b151c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   e57cae3f7f4cf       coredns-66bc5c9577-k2d8x               kube-system
	155b4f24673bf       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   e816f1745372d       kube-proxy-6m8f8                       kube-system
	2951d99fb4dcb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   7240ccfc89b58       kindnet-hh7zt                          kube-system
	f224e46bf3d29       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   6e3b57b1a1de3       kube-scheduler-pause-774682            kube-system
	88c92b704d85b       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   54c8cb46c371a       kube-apiserver-pause-774682            kube-system
	d72bb0bdd409c       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   aa41a923be1ee       kube-controller-manager-pause-774682   kube-system
	12b210cc547fe       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   9a14e3d0708a7       etcd-pause-774682                      kube-system
	
	
	==> coredns [40517c60f4a8d0c10f9528212c5a220c5538dc0d466f57b9c249a1de4b2d9e45] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60627 - 24717 "HINFO IN 6612679793645246457.7199248238648309224. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015094777s
	
	
	==> coredns [749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46028 - 40664 "HINFO IN 6615269274718692194.1592327369996744137. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019420029s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-774682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-774682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=pause-774682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_31_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:31:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-774682
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:32:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:32:18 +0000   Tue, 02 Dec 2025 20:31:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:32:18 +0000   Tue, 02 Dec 2025 20:31:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:32:18 +0000   Tue, 02 Dec 2025 20:31:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:32:18 +0000   Tue, 02 Dec 2025 20:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-774682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                27e8817a-e96b-40ca-bc7f-268161b8b480
	  Boot ID:                    6f263786-4b2a-4372-aee6-4673ff0a1edf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-k2d8x                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-774682                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-hh7zt                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-774682             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-774682    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-6m8f8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-774682             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 75s   kube-proxy       
	  Normal   Starting                 17s   kube-proxy       
	  Normal   Starting                 82s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s   kubelet          Node pause-774682 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s   kubelet          Node pause-774682 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s   kubelet          Node pause-774682 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s   node-controller  Node pause-774682 event: Registered Node pause-774682 in Controller
	  Normal   NodeReady                35s   kubelet          Node pause-774682 status is now: NodeReady
	  Normal   RegisteredNode           16s   node-controller  Node pause-774682 event: Registered Node pause-774682 in Controller
	
	
	==> dmesg <==
	[Dec 2 19:47] overlayfs: idmapped layers are currently not supported
	[ +24.868591] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:55] overlayfs: idmapped layers are currently not supported
	[  +3.715582] overlayfs: idmapped layers are currently not supported
	[Dec 2 19:58] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:02] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:04] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:09] overlayfs: idmapped layers are currently not supported
	[ +31.785180] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:10] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:12] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:13] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:14] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [12b210cc547fecac184a95362f3fc71a3557f11c3f96e014c6090ff680e2c37c] <==
	{"level":"warn","ts":"2025-12-02T20:31:13.182336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.195127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.219569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.248443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.262978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.282118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:31:13.404103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35288","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T20:32:07.150006Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T20:32:07.150049Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-774682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-02T20:32:07.150480Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T20:32:07.150564Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T20:32:07.292457Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:32:07.292589Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-12-02T20:32:07.292423Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T20:32:07.292743Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T20:32:07.292780Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T20:32:07.292855Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T20:32:07.292961Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T20:32:07.293005Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:32:07.292877Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-02T20:32:07.292887Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T20:32:07.296657Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-02T20:32:07.296847Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T20:32:07.296967Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-02T20:32:07.297047Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-774682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [720b2cec41b0e289b1e17fcbf8dfda0ac726fa26efd2452e64246a1f9de8de1b] <==
	{"level":"warn","ts":"2025-12-02T20:32:18.443389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.488833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.507860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.550196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.618783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.648286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.679597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.712496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.742967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.769940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.797444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.844420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.881965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.905409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.944001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:18.965746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.014893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.056216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.075357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.094133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.116086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.146226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.161409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.212743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:32:19.242667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:32:38 up  2:14,  0 user,  load average: 1.60, 1.66, 1.72
	Linux pause-774682 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0efee45e9768d1e83b9fae7ed50c49e3d3ae215918643446198defc1f54a35b1] <==
	I1202 20:32:15.313439       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:32:15.313951       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:32:15.314152       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:32:15.314197       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:32:15.314257       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:32:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:32:15.590539       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:32:15.590626       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:32:15.595857       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:32:15.604130       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 20:32:20.432957       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:32:20.433080       1 metrics.go:72] Registering metrics
	I1202 20:32:20.433185       1 controller.go:711] "Syncing nftables rules"
	I1202 20:32:25.590810       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:32:25.590954       1 main.go:301] handling current node
	I1202 20:32:35.590877       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:32:35.590929       1 main.go:301] handling current node
	
	
	==> kindnet [2951d99fb4dcbff701c97808399b1ff8aa99a8e8c9f4ff303e5dcd3a11e69d29] <==
	I1202 20:31:22.604353       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 20:31:22.690602       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1202 20:31:22.690740       1 main.go:148] setting mtu 1500 for CNI 
	I1202 20:31:22.690760       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 20:31:22.690776       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T20:31:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 20:31:22.891184       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 20:31:22.891254       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 20:31:22.891289       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 20:31:22.891615       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 20:31:52.891946       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 20:31:52.891966       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 20:31:52.892063       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 20:31:52.892153       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1202 20:31:54.491992       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 20:31:54.492096       1 metrics.go:72] Registering metrics
	I1202 20:31:54.492182       1 controller.go:711] "Syncing nftables rules"
	I1202 20:32:02.897723       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1202 20:32:02.897780       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1a512904d19713ec18413a4d149443e9c01ab0567733a1477ec83a78dbbb44d1] <==
	I1202 20:32:20.332278       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 20:32:20.336196       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:32:20.336284       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 20:32:20.336355       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1202 20:32:20.336395       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 20:32:20.337261       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:32:20.337327       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 20:32:20.344447       1 aggregator.go:171] initial CRD sync complete...
	I1202 20:32:20.344477       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 20:32:20.344485       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 20:32:20.344492       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:32:20.344747       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 20:32:20.344795       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 20:32:20.357185       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 20:32:20.383794       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:32:20.390754       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1202 20:32:20.390794       1 policy_source.go:240] refreshing policies
	E1202 20:32:20.450197       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:32:20.464313       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:32:21.012112       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:32:21.335659       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:32:22.709035       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 20:32:22.951606       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:32:23.004258       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:32:23.105124       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [88c92b704d85b8c7806dc927206e2665a9600188d1426de4a7fb8070836531e7] <==
	W1202 20:32:07.161304       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161366       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161415       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161467       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161518       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161565       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161613       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.161928       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.168463       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169916       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.170027       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.170166       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169384       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169413       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169440       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169463       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169491       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169517       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169543       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169567       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169593       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169620       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.169673       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.170930       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 20:32:07.171035       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d72bb0bdd409ca1b9eee6c2382b896504c9bf98f1c09d2a7c0bb239af2a5c6bc] <==
	I1202 20:31:21.203783       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:31:21.205227       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 20:31:21.206426       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:31:21.211842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:31:21.217010       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:31:21.217035       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:31:21.217043       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:31:21.232374       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:31:21.233021       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1202 20:31:21.241783       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:31:21.248242       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 20:31:21.249470       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 20:31:21.249839       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1202 20:31:21.249867       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 20:31:21.249884       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:31:21.249918       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 20:31:21.252183       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:31:21.252639       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1202 20:31:21.253018       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 20:31:21.253412       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:31:21.253452       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 20:31:21.253575       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:31:21.253601       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 20:31:21.253616       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:32:06.207297       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ffa7d32b088716bc879c84e1d44bfb00c39ab2a6ae1fabd5a99482106d3200cb] <==
	I1202 20:32:22.695249       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 20:32:22.696982       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 20:32:22.697050       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:32:22.700709       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 20:32:22.700809       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:32:22.704698       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 20:32:22.704797       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 20:32:22.708183       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:32:22.716729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 20:32:22.720065       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1202 20:32:22.722547       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 20:32:22.724725       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:32:22.724747       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:32:22.724754       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:32:22.728065       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1202 20:32:22.731672       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1202 20:32:22.733934       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1202 20:32:22.744939       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1202 20:32:22.744990       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 20:32:22.745012       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1202 20:32:22.745476       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1202 20:32:22.745027       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1202 20:32:22.745885       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1202 20:32:22.745915       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1202 20:32:22.769436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [155b4f24673bf135703ca3c7d2801c37e6f9067375a70dabce8152b9423f0975] <==
	I1202 20:31:22.622961       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:31:22.703770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:31:22.810020       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:31:22.825979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:31:22.846001       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:31:22.888727       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:31:22.888847       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:31:22.898624       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:31:22.899220       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:31:22.899605       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:31:22.901541       1 config.go:200] "Starting service config controller"
	I1202 20:31:22.901604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:31:22.901794       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:31:22.913624       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:31:22.913825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 20:31:22.902142       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:31:22.913871       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:31:22.913877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:31:22.901823       1 config.go:309] "Starting node config controller"
	I1202 20:31:22.913901       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:31:22.913917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:31:23.001869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [8d6248562e377d650b3de936b7b6cfc41643df49e0c4434a12802fe78a3de33d] <==
	I1202 20:32:19.982146       1 server_linux.go:53] "Using iptables proxy"
	I1202 20:32:20.192629       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:32:20.493959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:32:20.494004       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1202 20:32:20.494068       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:32:20.540812       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 20:32:20.540884       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:32:20.548317       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:32:20.548628       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:32:20.548650       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:32:20.551090       1 config.go:200] "Starting service config controller"
	I1202 20:32:20.554204       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:32:20.555675       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:32:20.555720       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:32:20.555742       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:32:20.555747       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:32:20.557142       1 config.go:309] "Starting node config controller"
	I1202 20:32:20.557164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:32:20.557171       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:32:20.656301       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:32:20.656345       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:32:20.656380       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [18e2a378f13b253e22ab34cc34281b5b422300fcb3e6469661edf596e32536a6] <==
	I1202 20:32:18.825412       1 serving.go:386] Generated self-signed cert in-memory
	W1202 20:32:20.244889       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 20:32:20.244977       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 20:32:20.245012       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 20:32:20.245044       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 20:32:20.348420       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1202 20:32:20.351879       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:32:20.354838       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:32:20.355483       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:32:20.370112       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:32:20.355507       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 20:32:20.470286       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3] <==
	E1202 20:31:14.864433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:31:14.864513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:31:14.864575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 20:31:14.864926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:31:14.865141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:31:14.865212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:31:14.865259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 20:31:14.865299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:31:14.865340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 20:31:14.865409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 20:31:14.865450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 20:31:14.870176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1202 20:31:14.870178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:31:14.870282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 20:31:14.870329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 20:31:14.870384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 20:31:14.869907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:31:15.720884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1202 20:31:18.551842       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:32:07.159056       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 20:32:07.159083       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 20:32:07.159103       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 20:32:07.159136       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:32:07.159290       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 20:32:07.159305       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.907072    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="565e38cd573797c94a307fd326fe2c3f" pod="kube-system/etcd-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.907388    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="597fada52996194c31d0bb778894ba14" pod="kube-system/kube-apiserver-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.907756    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85043deead313637703f384cbb896f2a" pod="kube-system/kube-scheduler-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.908047    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-hh7zt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2b9582da-2b2c-4243-baf2-b681960b8809" pod="kube-system/kindnet-hh7zt"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.908336    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6m8f8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fccfb9dc-b054-469a-8dc0-1fa4c56ec683" pod="kube-system/kube-proxy-6m8f8"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: I1202 20:32:14.918256    1312 scope.go:117] "RemoveContainer" containerID="749b87a7b151c1bd34b12295b4ce03363d1907f5b6013f49538c2eb3da697036"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.918851    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85043deead313637703f384cbb896f2a" pod="kube-system/kube-scheduler-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.919060    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-hh7zt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2b9582da-2b2c-4243-baf2-b681960b8809" pod="kube-system/kindnet-hh7zt"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.919307    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6m8f8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fccfb9dc-b054-469a-8dc0-1fa4c56ec683" pod="kube-system/kube-proxy-6m8f8"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.919539    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-k2d8x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a9ab0b8e-b81f-4abe-b30f-2fa936fa2aa4" pod="kube-system/coredns-66bc5c9577-k2d8x"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.919977    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e719364ba9dec0a70dcd38332b913eb4" pod="kube-system/kube-controller-manager-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.920286    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="565e38cd573797c94a307fd326fe2c3f" pod="kube-system/etcd-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.920589    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="597fada52996194c31d0bb778894ba14" pod="kube-system/kube-apiserver-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: I1202 20:32:14.943247    1312 scope.go:117] "RemoveContainer" containerID="f224e46bf3d2999dec334625d9e8d190bef56cb08335644bbdbbdfbcee7d03a3"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944026    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="597fada52996194c31d0bb778894ba14" pod="kube-system/kube-apiserver-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944281    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="85043deead313637703f384cbb896f2a" pod="kube-system/kube-scheduler-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944487    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-hh7zt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2b9582da-2b2c-4243-baf2-b681960b8809" pod="kube-system/kindnet-hh7zt"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944668    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6m8f8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fccfb9dc-b054-469a-8dc0-1fa4c56ec683" pod="kube-system/kube-proxy-6m8f8"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.944879    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-k2d8x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a9ab0b8e-b81f-4abe-b30f-2fa936fa2aa4" pod="kube-system/coredns-66bc5c9577-k2d8x"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.945073    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e719364ba9dec0a70dcd38332b913eb4" pod="kube-system/kube-controller-manager-pause-774682"
	Dec 02 20:32:14 pause-774682 kubelet[1312]: E1202 20:32:14.945260    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-774682\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="565e38cd573797c94a307fd326fe2c3f" pod="kube-system/etcd-pause-774682"
	Dec 02 20:32:26 pause-774682 kubelet[1312]: W1202 20:32:26.978177    1312 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 02 20:32:33 pause-774682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 02 20:32:33 pause-774682 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 02 20:32:33 pause-774682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-774682 -n pause-774682
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-774682 -n pause-774682: exit status 2 (346.711861ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-774682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (7200.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-355440 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1202 20:43:08.151880    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:08.158290    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:08.169730    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:08.191199    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:08.232657    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:08.314860    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:08.476408    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:08.798413    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:09.439821    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:10.721693    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:13.282969    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:18.405186    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:28.646869    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:46.176314    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:43:49.128318    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:30.089828    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:30.666775    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:30.673169    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:30.684564    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:30.705942    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:30.747371    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:30.828770    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:30.990258    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:31.311922    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:31.953953    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:33.235554    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:35.798162    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:40.919591    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:44:51.161865    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:45:11.643589    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:45:52.011326    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/old-k8s-version-546313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:45:52.607058    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:46:45.854663    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:46:57.357089    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:47:14.529729    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/default-k8s-diff-port-952865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (14m44s)
		TestStartStop (17m17s)
		TestStartStop/group/newest-cni (4m57s)
		TestStartStop/group/newest-cni/serial (4m57s)
		TestStartStop/group/newest-cni/serial/FirstStart (4m57s)
		TestStartStop/group/no-preload (6m50s)
		TestStartStop/group/no-preload/serial (6m50s)
		TestStartStop/group/no-preload/serial/FirstStart (6m50s)

                                                
                                                
goroutine 5443 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

                                                
                                                
goroutine 1 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000654380, 0x400128fbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400000e150, {0x534c580, 0x2c, 0x2c}, {0x400128fd08?, 0x125774?, 0x5374f80?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x400042fcc0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x400042fcc0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

                                                
                                                
goroutine 3318 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0x40015f2300, 0x40019716c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2679
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 183 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000234080?}, 0x2f6e6f6974617267?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 176
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 5121 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001604a90, 0x10)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001604a80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40014c8480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40015d0fc0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x400011a150?}, 0x40016506b8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x400011a150}, 0x4001461f38, {0x369d680, 0x40012f4570}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001650788?, {0x369d680?, 0x40012f4570?}, 0xc0?, 0x40016507a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000088af0, 0x3b9aca00, 0x0, 0x1, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5086
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4841 [chan receive, 15 minutes]:
testing.(*testState).waitParallel(0x4000474640)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40018afc00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40018afc00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40018afc00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40018afc00, 0x4001c10300)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4765
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 836 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000234080?}, 0x40015f2480?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 835
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 150 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001921510, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001921500)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001a693e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001970850?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x400011a150?}, 0x40013f9ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x400011a150}, 0x40012ecf38, {0x369d680, 0x40019fc360}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40013f9fa8?, {0x369d680?, 0x40019fc360?}, 0x60?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400158e3c0, 0x3b9aca00, 0x0, 0x1, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 184
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 643 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff6e39e200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000476980?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4000476980)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4000476980)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x400078ba00)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x400078ba00)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000166900, {0x36d3120, 0x400078ba00})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000166900)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 641
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 5280 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0x40015f2180, 0x40019701c0)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5277
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5432 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0x4000270780, 0x4001970930)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5429
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 830 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40008fe290, 0x2a)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40008fe280)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400182dc80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000272a10?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x400011a150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x400011a150}, 0x4001435f38, {0x369d680, 0x4001aaf9b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x4001aaf9b0?}, 0x20?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400158ea50, 0x3b9aca00, 0x0, 0x1, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 837
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 152 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 151
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 151 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x400011a150}, 0x40013fcf40, 0x4001432f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x400011a150}, 0x0?, 0x40013fcf40, 0x40013fcf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x400011a150?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40006c0300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 184
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 184 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001a693e0, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 176
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 837 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400182dc80, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 835
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 2154 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x4000270f00, 0x400011b0a0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2153
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4765 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40018af500, 0x400196c348)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4463
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5278 [IO wait, 6 minutes]:
internal/poll.runtime_pollWait(0xffff6e39e400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40018143c0?, 0x4001544a87?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40018143c0, {0x4001544a87, 0x579, 0x579})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6228, {0x4001544a87?, 0x4001655568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001ac67b0, {0x369ba58, 0x4000126f70})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x4001ac67b0}, {0x369ba58, 0x4000126f70}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6228?, {0x369bc40, 0x4001ac67b0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6228, {0x369bc40, 0x4001ac67b0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x4001ac67b0}, {0x369bad8, 0x40000a6228}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40006c0780?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5277
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 3110 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0x40006c0480, 0x40016247e0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 3109
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4719 [chan receive, 5 minutes]:
testing.(*T).Run(0x400154b6c0, {0x296e9ac?, 0x0?}, 0x4000476a80)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x400154b6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x400154b6c0, 0x40016041c0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4717
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 832 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 831
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 2437 [select, 98 minutes]:
net/http.(*persistConn).writeLoop(0x40014d6240)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 2434
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 4843 [chan receive, 15 minutes]:
testing.(*testState).waitParallel(0x4000474640)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40015d2fc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40015d2fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40015d2fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40015d2fc0, 0x4001c10400)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4765
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5085 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000234080?}, 0x40014276c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5111
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4463 [chan receive, 15 minutes]:
testing.(*T).Run(0x40018aec40, {0x296d53a?, 0x68ab89f0167?}, 0x400196c348)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40018aec40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40018aec40, 0x339b500)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2299 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0x40014ae900, 0x40015d1650)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 764
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 2436 [select, 98 minutes]:
net/http.(*persistConn).readLoop(0x40014d6240)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 2434
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 4737 [chan receive, 6 minutes]:
testing.(*T).Run(0x400154ba40, {0x296e9ac?, 0x0?}, 0x4000476b00)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x400154ba40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x400154ba40, 0x4001604240)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4717
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3138 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0x40015f3680, 0x4001971d50)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 3137
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4842 [chan receive, 15 minutes]:
testing.(*testState).waitParallel(0x4000474640)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40018afdc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40018afdc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40018afdc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40018afdc0, 0x4001c10380)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4765
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5428 [chan receive, 5 minutes]:
testing.(*T).Run(0x40014261c0, {0x2978516?, 0x40000006ee?}, 0x4000476b80)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40014261c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40014261c0, 0x4000476a80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4719
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2515 [IO wait, 98 minutes]:
internal/poll.runtime_pollWait(0xffff6e39f000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40000e0c00?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40000e0c00)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40000e0c00)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40002359c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40002359c0)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000480000, {0x36d3120, 0x40002359c0})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000480000)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 2513
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 5040 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x400011a150}, 0x400009f740, 0x400009f788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x400011a150}, 0x10?, 0x400009f740, 0x400009f788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x400011a150?}, 0x0?, 0x40018f88c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400143a000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5036
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5035 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000234080?}, 0x4000270600?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5031
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5036 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001814900, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5031
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5279 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0xffff6df53600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001814480?, 0x400171b84d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001814480, {0x400171b84d, 0x7b3, 0x7b3})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6240, {0x400171b84d?, 0x40013fc568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001ac67e0, {0x369ba58, 0x4000126f78})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x4001ac67e0}, {0x369ba58, 0x4000126f78}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6240?, {0x369bc40, 0x4001ac67e0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6240, {0x369bc40, 0x4001ac67e0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x4001ac67e0}, {0x369bad8, 0x40000a6240}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40006c0480?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5277
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4840 [chan receive, 15 minutes]:
testing.(*testState).waitParallel(0x4000474640)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40018afa40)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40018afa40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40018afa40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40018afa40, 0x4001c10280)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4765
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 831 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x400011a150}, 0x40013fb740, 0x4001430f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x400011a150}, 0x50?, 0x40013fb740, 0x40013fb788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x400011a150?}, 0x40002a5570?, 0x400193a780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400143e900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 837
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5123 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5122
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4766 [chan receive, 15 minutes]:
testing.(*testState).waitParallel(0x4000474640)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40018af6c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40018af6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40018af6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40018af6c0, 0x4001c10000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4765
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4913 [chan receive, 15 minutes]:
testing.(*testState).waitParallel(0x4000474640)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40014d8380)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40014d8380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40014d8380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40014d8380, 0x4000476e80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4765
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5431 [IO wait]:
internal/poll.runtime_pollWait(0xffff6df52e00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40016745a0?, 0x400151c600?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40016745a0, {0x400151c600, 0x1a00, 0x1a00})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6278, {0x400151c600?, 0x40000a5568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40018c8fc0, {0x369ba58, 0x4000126480})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x40018c8fc0}, {0x369ba58, 0x4000126480}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6278?, {0x369bc40, 0x40018c8fc0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6278, {0x369bc40, 0x40018c8fc0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x40018c8fc0}, {0x369bad8, 0x40000a6278}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001426540?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5429
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 2215 [chan send, 99 minutes]:
os/exec.(*Cmd).watchCtx(0x400150e000, 0x400169b8f0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2214
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5276 [chan receive, 6 minutes]:
testing.(*T).Run(0x40015d36c0, {0x2978516?, 0x40000006ee?}, 0x4000476b80)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40015d36c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40015d36c0, 0x4000476b00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4737
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5122 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x400011a150}, 0x4001652f40, 0x4001652f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x400011a150}, 0xe8?, 0x4001652f40, 0x4001652f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x400011a150?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4000489ba0?, 0x95c64?, 0x4000271e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5086
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4896 [chan receive, 15 minutes]:
testing.(*testState).waitParallel(0x4000474640)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40014d81c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40014d81c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40014d81c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40014d81c0, 0x4000476d80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4765
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5430 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0xffff6df53800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40016744e0?, 0x4001545ab1?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40016744e0, {0x4001545ab1, 0x54f, 0x54f})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6238, {0x4001545ab1?, 0x4001653d68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40018c8f90, {0x369ba58, 0x4000126478})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bc40, 0x40018c8f90}, {0x369ba58, 0x4000126478}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6238?, {0x369bc40, 0x40018c8f90})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6238, {0x369bc40, 0x40018c8f90})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bc40, 0x40018c8f90}, {0x369bad8, 0x40000a6238}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4000270600?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5429
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 5277 [syscall, 6 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x15, 0x400145fb48, 0x4, 0x40006c8f30, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x400145fca8?, 0x1929a0?, 0xfffff073c1a5?, 0x0?, 0x4000165080?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x400078a280)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x400145fc78?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x40015f2180)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x40015f2180)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x40015d3880, 0x40015f2180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateFirstStart({0x36e5778?, 0x40004d2380?}, 0x40015d3880, {0x40003a03d8?, 0x7d451d3740f?}, {0x22632926?, 0x2263292600161e84?}, {0x692f4ee6?, 0x400145ff58?}, {0x4000480300?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:184 +0x88
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40015d3880?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40015d3880, 0x4000476b80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5276
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2713 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2712
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4717 [chan receive, 8 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x400154b340, 0x339b730)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 4546
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 2761 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe880, {{0x36f3430, 0x4000234080?}, 0x40015f3800?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2760
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 2711 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40006cb1d0, 0x21)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006cb1c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400177f9e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004e84d0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x400011a150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x400011a150}, 0x400152df38, {0x369d680, 0x40020c06f0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x40020c06f0?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000432c20, 0x3b9aca00, 0x0, 0x1, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2762
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 2712 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b10, 0x400011a150}, 0x40013fbf40, 0x400152af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b10, 0x400011a150}, 0x51?, 0x40013fbf40, 0x40013fbf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b10?, 0x400011a150?}, 0x4001cdbb00?, 0x4001cf28c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40015f3980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2762
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 2762 [chan receive, 71 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400177f9e0, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2760
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5041 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5040
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5086 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40014c8480, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5111
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5039 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40019219d0, 0x10)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40019219c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001814900)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400027cd20?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b10?, 0x400011a150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b10, 0x400011a150}, 0x400011cf38, {0x369d680, 0x4001595710}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3430?, {0x369d680?, 0x4001595710?}, 0x40?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40003933c0, 0x3b9aca00, 0x0, 0x1, 0x400011a150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5036
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4546 [chan receive, 18 minutes]:
testing.(*T).Run(0x400154b180, {0x296d53a?, 0x400152ef58?}, 0x339b730)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x400154b180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x400154b180, 0x339b548)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5429 [syscall, 5 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x4000120b48, 0x4, 0x40006c9170, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x4000120ca8?, 0x1929a0?, 0xfffff073c1a5?, 0x0?, 0x4001c849c0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40008fe4c0)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x4000120c78?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4000270780)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4000270780)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x4001426540, 0x4000270780)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateFirstStart({0x36e5778?, 0x40004d64d0?}, 0x4001426540, {0x40014e6228?, 0x7ee8d36e366?}, {0xe736e74?, 0xe736e7400161e84?}, {0x692f4f57?, 0x4000120f58?}, {0x4000480500?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:184 +0x88
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x4001426540?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x4001426540, 0x4000476b80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 5428
	/usr/local/go/src/testing/testing.go:1997 +0x364


Test pass (218/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 40.86
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 34.55
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.33
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 154.91
40 TestAddons/serial/GCPAuth/Namespaces 0.22
41 TestAddons/serial/GCPAuth/FakeCredentials 9.9
57 TestAddons/StoppedEnableDisable 12.44
58 TestCertOptions 35.27
59 TestCertExpiration 249.65
61 TestForceSystemdFlag 35.17
62 TestForceSystemdEnv 35.33
67 TestErrorSpam/setup 33.48
68 TestErrorSpam/start 0.78
69 TestErrorSpam/status 1.07
70 TestErrorSpam/pause 6.77
71 TestErrorSpam/unpause 5.75
72 TestErrorSpam/stop 1.5
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 77.54
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 26.68
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
84 TestFunctional/serial/CacheCmd/cache/add_local 1.3
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 33.05
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.44
96 TestFunctional/serial/InvalidService 4.62
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 14.76
100 TestFunctional/parallel/DryRun 0.44
101 TestFunctional/parallel/InternationalLanguage 0.19
102 TestFunctional/parallel/StatusCmd 1.02
107 TestFunctional/parallel/AddonsCmd 0.21
108 TestFunctional/parallel/PersistentVolumeClaim 25.95
110 TestFunctional/parallel/SSHCmd 0.82
111 TestFunctional/parallel/CpCmd 2.43
113 TestFunctional/parallel/FileSync 0.28
114 TestFunctional/parallel/CertSync 1.94
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
122 TestFunctional/parallel/License 0.33
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.43
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
138 TestFunctional/parallel/MountCmd/any-port 7.14
139 TestFunctional/parallel/MountCmd/specific-port 1.95
140 TestFunctional/parallel/MountCmd/VerifyCleanup 2.18
141 TestFunctional/parallel/ServiceCmd/List 0.61
142 TestFunctional/parallel/ServiceCmd/JSONOutput 1.43
146 TestFunctional/parallel/Version/short 0.09
147 TestFunctional/parallel/Version/components 0.96
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
153 TestFunctional/parallel/ImageCommands/Setup 0.66
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.46
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.91
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.84
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.06
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.99
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.45
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.41
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.13
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.73
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.35
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.65
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.55
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.28
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.37
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.4
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.95
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.13
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.81
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.24
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.51
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.14
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 196.7
265 TestMultiControlPlane/serial/DeployApp 6.82
266 TestMultiControlPlane/serial/PingHostFromPods 1.4
267 TestMultiControlPlane/serial/AddWorkerNode 58.17
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
270 TestMultiControlPlane/serial/CopyFile 19.63
271 TestMultiControlPlane/serial/StopSecondaryNode 12.81
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
273 TestMultiControlPlane/serial/RestartSecondaryNode 30.31
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.33
278 TestMultiControlPlane/serial/StopCluster 24.13
281 TestMultiControlPlane/serial/AddSecondaryNode 95.07
287 TestJSONOutput/start/Command 78.48
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.83
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.23
312 TestKicCustomNetwork/create_custom_network 57.25
313 TestKicCustomNetwork/use_default_bridge_network 36.6
314 TestKicExistingNetwork 33.15
315 TestKicCustomSubnet 35.2
316 TestKicStaticIP 35.81
317 TestMainNoArgs 0.1
318 TestMinikubeProfile 70.65
321 TestMountStart/serial/StartWithMountFirst 8.73
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.85
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.33
328 TestMountStart/serial/RestartStopped 8.52
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 141.03
333 TestMultiNode/serial/DeployApp2Nodes 4.82
334 TestMultiNode/serial/PingHostFrom2Pods 0.92
335 TestMultiNode/serial/AddNode 58.4
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.76
339 TestMultiNode/serial/StopNode 2.39
340 TestMultiNode/serial/StartAfterStop 8.17
341 TestMultiNode/serial/RestartKeepsNodes 76.25
342 TestMultiNode/serial/DeleteNode 5.65
343 TestMultiNode/serial/StopMultiNode 23.97
344 TestMultiNode/serial/RestartMultiNode 53.7
345 TestMultiNode/serial/ValidateNameConflict 35.93
350 TestPreload 149.94
352 TestScheduledStopUnix 108.25
355 TestInsufficientStorage 10.41
356 TestRunningBinaryUpgrade 62.55
359 TestMissingContainerUpgrade 143.43
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 39.23
363 TestNoKubernetes/serial/StartWithStopK8s 11.14
364 TestNoKubernetes/serial/Start 9.01
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
367 TestNoKubernetes/serial/ProfileList 1.28
368 TestNoKubernetes/serial/Stop 1.4
369 TestNoKubernetes/serial/StartNoArgs 7.71
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
371 TestStoppedBinaryUpgrade/Setup 11
372 TestStoppedBinaryUpgrade/Upgrade 299.29
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.69
382 TestPause/serial/Start 82.2
383 TestPause/serial/SecondStartNoReconfiguration 27.07

TestDownloadOnly/v1.28.0/json-events (40.86s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-840542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-840542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (40.861994335s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (40.86s)
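For reference, the invocation this test drives can be replayed by hand outside the suite. This is only a sketch, assuming the minikube binary has already been built at out/minikube-linux-arm64 (as in this run) and Docker is available on the host; the profile name is arbitrary:

    out/minikube-linux-arm64 start -o=json --download-only -p download-only-840542 \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=crio --driver=docker

With --download-only, start exits after caching artifacts (the preload tarball and kic base image seen in the subtests below) instead of creating a cluster node.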

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1202 18:48:41.239387    4470 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1202 18:48:41.239468    4470 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-840542
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-840542: exit status 85 (86.783599ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-840542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-840542 │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 18:48:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 18:48:00.451305    4478 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:48:00.451581    4478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:48:00.451593    4478 out.go:374] Setting ErrFile to fd 2...
	I1202 18:48:00.451616    4478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:48:00.452016    4478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	W1202 18:48:00.452218    4478 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22021-2526/.minikube/config/config.json: open /home/jenkins/minikube-integration/22021-2526/.minikube/config/config.json: no such file or directory
	I1202 18:48:00.452764    4478 out.go:368] Setting JSON to true
	I1202 18:48:00.453945    4478 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1819,"bootTime":1764699462,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 18:48:00.454035    4478 start.go:143] virtualization:  
	I1202 18:48:00.466558    4478 out.go:99] [download-only-840542] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1202 18:48:00.466812    4478 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 18:48:00.467065    4478 notify.go:221] Checking for updates...
	I1202 18:48:00.470590    4478 out.go:171] MINIKUBE_LOCATION=22021
	I1202 18:48:00.474952    4478 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 18:48:00.481028    4478 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:48:00.486628    4478 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 18:48:00.492122    4478 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 18:48:00.499442    4478 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 18:48:00.499839    4478 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 18:48:00.536318    4478 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 18:48:00.536445    4478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:48:00.949547    4478 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-02 18:48:00.940303789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:48:00.949702    4478 docker.go:319] overlay module found
	I1202 18:48:00.952895    4478 out.go:99] Using the docker driver based on user configuration
	I1202 18:48:00.952934    4478 start.go:309] selected driver: docker
	I1202 18:48:00.952941    4478 start.go:927] validating driver "docker" against <nil>
	I1202 18:48:00.953051    4478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:48:01.011014    4478 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-02 18:48:01.001793776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:48:01.011197    4478 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 18:48:01.011549    4478 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 18:48:01.011733    4478 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 18:48:01.015166    4478 out.go:171] Using Docker driver with root privileges
	I1202 18:48:01.018365    4478 cni.go:84] Creating CNI manager for ""
	I1202 18:48:01.018443    4478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:48:01.018455    4478 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 18:48:01.018535    4478 start.go:353] cluster config:
	{Name:download-only-840542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-840542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 18:48:01.021738    4478 out.go:99] Starting "download-only-840542" primary control-plane node in "download-only-840542" cluster
	I1202 18:48:01.021766    4478 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 18:48:01.024813    4478 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 18:48:01.024874    4478 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 18:48:01.024940    4478 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 18:48:01.041009    4478 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 18:48:01.041240    4478 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 18:48:01.041349    4478 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 18:48:01.096151    4478 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1202 18:48:01.096179    4478 cache.go:65] Caching tarball of preloaded images
	I1202 18:48:01.096346    4478 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 18:48:01.099786    4478 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1202 18:48:01.099814    4478 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1202 18:48:01.193103    4478 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1202 18:48:01.193302    4478 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1202 18:48:05.722187    4478 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	
	
	* The control-plane node download-only-840542 host does not exist
	  To start a cluster, run: "minikube start -p download-only-840542"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-840542
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.2/json-events (34.55s)
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-790899 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-790899 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (34.551512093s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (34.55s)

TestDownloadOnly/v1.34.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1202 18:49:16.215640    4470 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1202 18:49:16.215670    4470 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-790899
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-790899: exit status 85 (83.707653ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-840542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-840542 │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │ 02 Dec 25 18:48 UTC │
	│ delete  │ -p download-only-840542                                                                                                                                                   │ download-only-840542 │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │ 02 Dec 25 18:48 UTC │
	│ start   │ -o=json --download-only -p download-only-790899 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-790899 │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 18:48:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 18:48:41.704156    4680 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:48:41.704346    4680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:48:41.704376    4680 out.go:374] Setting ErrFile to fd 2...
	I1202 18:48:41.704395    4680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:48:41.704648    4680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:48:41.705074    4680 out.go:368] Setting JSON to true
	I1202 18:48:41.705853    4680 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1860,"bootTime":1764699462,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 18:48:41.705935    4680 start.go:143] virtualization:  
	I1202 18:48:41.709351    4680 out.go:99] [download-only-790899] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 18:48:41.709638    4680 notify.go:221] Checking for updates...
	I1202 18:48:41.712579    4680 out.go:171] MINIKUBE_LOCATION=22021
	I1202 18:48:41.715781    4680 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 18:48:41.718681    4680 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:48:41.721592    4680 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 18:48:41.724442    4680 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 18:48:41.730192    4680 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 18:48:41.730436    4680 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 18:48:41.764550    4680 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 18:48:41.764667    4680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:48:41.819675    4680 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:48:41.810917683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:48:41.819785    4680 docker.go:319] overlay module found
	I1202 18:48:41.822792    4680 out.go:99] Using the docker driver based on user configuration
	I1202 18:48:41.822823    4680 start.go:309] selected driver: docker
	I1202 18:48:41.822830    4680 start.go:927] validating driver "docker" against <nil>
	I1202 18:48:41.822923    4680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:48:41.875120    4680 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:48:41.866460624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:48:41.875269    4680 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 18:48:41.875530    4680 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 18:48:41.875671    4680 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 18:48:41.878771    4680 out.go:171] Using Docker driver with root privileges
	I1202 18:48:41.881609    4680 cni.go:84] Creating CNI manager for ""
	I1202 18:48:41.881684    4680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 18:48:41.881700    4680 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 18:48:41.881799    4680 start.go:353] cluster config:
	{Name:download-only-790899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-790899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 18:48:41.884691    4680 out.go:99] Starting "download-only-790899" primary control-plane node in "download-only-790899" cluster
	I1202 18:48:41.884715    4680 cache.go:134] Beginning downloading kic base image for docker with crio
	I1202 18:48:41.887724    4680 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 18:48:41.887765    4680 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:48:41.887935    4680 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 18:48:41.903238    4680 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 18:48:41.903377    4680 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 18:48:41.903400    4680 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 18:48:41.903405    4680 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 18:48:41.903412    4680 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 18:48:41.954591    4680 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1202 18:48:41.954618    4680 cache.go:65] Caching tarball of preloaded images
	I1202 18:48:41.954780    4680 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 18:48:41.957988    4680 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1202 18:48:41.958021    4680 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1202 18:48:42.056410    4680 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1202 18:48:42.056462    4680 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/22021-2526/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-790899 host does not exist
	  To start a cluster, run: "minikube start -p download-only-790899"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-790899
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-beta.0/json-events (2.33s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-899383 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-899383 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.333326131s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.33s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-899383
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-899383: exit status 85 (80.830557ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-840542 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-840542 │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │ 02 Dec 25 18:48 UTC │
	│ delete  │ -p download-only-840542                                                                                                                                                          │ download-only-840542 │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │ 02 Dec 25 18:48 UTC │
	│ start   │ -o=json --download-only -p download-only-790899 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-790899 │ jenkins │ v1.37.0 │ 02 Dec 25 18:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ delete  │ -p download-only-790899                                                                                                                                                          │ download-only-790899 │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │ 02 Dec 25 18:49 UTC │
	│ start   │ -o=json --download-only -p download-only-899383 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-899383 │ jenkins │ v1.37.0 │ 02 Dec 25 18:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 18:49:16
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 18:49:16.687527    4887 out.go:360] Setting OutFile to fd 1 ...
	I1202 18:49:16.687645    4887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:49:16.687659    4887 out.go:374] Setting ErrFile to fd 2...
	I1202 18:49:16.687664    4887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 18:49:16.687892    4887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 18:49:16.688262    4887 out.go:368] Setting JSON to true
	I1202 18:49:16.688927    4887 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1895,"bootTime":1764699462,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 18:49:16.688989    4887 start.go:143] virtualization:  
	I1202 18:49:16.692363    4887 out.go:99] [download-only-899383] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 18:49:16.692566    4887 notify.go:221] Checking for updates...
	I1202 18:49:16.695998    4887 out.go:171] MINIKUBE_LOCATION=22021
	I1202 18:49:16.698907    4887 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 18:49:16.701725    4887 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 18:49:16.704586    4887 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 18:49:16.707421    4887 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 18:49:16.712977    4887 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 18:49:16.713235    4887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 18:49:16.737884    4887 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 18:49:16.737980    4887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:49:16.803714    4887 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:49:16.795260887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:49:16.803810    4887 docker.go:319] overlay module found
	I1202 18:49:16.806817    4887 out.go:99] Using the docker driver based on user configuration
	I1202 18:49:16.806852    4887 start.go:309] selected driver: docker
	I1202 18:49:16.806859    4887 start.go:927] validating driver "docker" against <nil>
	I1202 18:49:16.806962    4887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 18:49:16.865341    4887 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:49:16.851716723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 18:49:16.865500    4887 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 18:49:16.865788    4887 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 18:49:16.865938    4887 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 18:49:16.869009    4887 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-899383 host does not exist
	  To start a cluster, run: "minikube start -p download-only-899383"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-899383
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.61s)
=== RUN   TestBinaryMirror
I1202 18:49:20.405750    4470 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-279600 --alsologtostderr --binary-mirror http://127.0.0.1:42717 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-279600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-279600
--- PASS: TestBinaryMirror (0.61s)
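The --binary-mirror flow exercised above can be reproduced the same way; this is only a sketch, assuming out/minikube-linux-arm64 is built and something is serving the expected Kubernetes binaries at the mirror address (127.0.0.1:42717 in this run, an address the test harness controls):

    out/minikube-linux-arm64 start --download-only -p binary-mirror-279600 \
      --alsologtostderr --binary-mirror http://127.0.0.1:42717 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p binary-mirror-279600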

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-391119
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-391119: exit status 85 (75.236088ms)

                                                
                                                
-- stdout --
	* Profile "addons-391119" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-391119"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-391119
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-391119: exit status 85 (88.615204ms)

                                                
                                                
-- stdout --
	* Profile "addons-391119" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-391119"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (154.91s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-391119 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-391119 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.914400305s)
--- PASS: TestAddons/Setup (154.91s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-391119 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-391119 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (9.9s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-391119 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-391119 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9e10da12-ac5a-4af7-9fd2-88eea49a93f1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9e10da12-ac5a-4af7-9fd2-88eea49a93f1] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003364725s
addons_test.go:694: (dbg) Run:  kubectl --context addons-391119 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-391119 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-391119 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-391119 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.90s)

TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-391119
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-391119: (12.158111013s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-391119
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-391119
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-391119
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

TestCertOptions (35.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-403196 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1202 20:36:40.454029    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:36:45.853207    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:36:49.252746    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:36:57.357057    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-403196 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.414410024s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-403196 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-403196 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-403196 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-403196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-403196
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-403196: (2.105725377s)
--- PASS: TestCertOptions (35.27s)

TestCertExpiration (249.65s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-182891 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-182891 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (31.157573936s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-182891 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-182891 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (35.960849745s)
helpers_test.go:175: Cleaning up "cert-expiration-182891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-182891
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-182891: (2.524250183s)
--- PASS: TestCertExpiration (249.65s)

TestForceSystemdFlag (35.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-317144 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-317144 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.324541306s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-317144 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-317144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-317144
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-317144: (2.486377979s)
--- PASS: TestForceSystemdFlag (35.17s)

TestForceSystemdEnv (35.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-639740 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1202 20:33:46.175398    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-639740 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.515128163s)
helpers_test.go:175: Cleaning up "force-systemd-env-639740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-639740
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-639740: (2.810159743s)
--- PASS: TestForceSystemdEnv (35.33s)

TestErrorSpam/setup (33.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-121793 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-121793 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-121793 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-121793 --driver=docker  --container-runtime=crio: (33.480945648s)
--- PASS: TestErrorSpam/setup (33.48s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (6.77s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause: exit status 80 (1.943592567s)

                                                
                                                
-- stdout --
	* Pausing node nospam-121793 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:55:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause: exit status 80 (2.442936046s)

                                                
                                                
-- stdout --
	* Pausing node nospam-121793 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:55:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause: exit status 80 (2.378615816s)

                                                
                                                
-- stdout --
	* Pausing node nospam-121793 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:55:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.77s)

TestErrorSpam/unpause (5.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause: exit status 80 (1.655124305s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-121793 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:56:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause: exit status 80 (1.920973874s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-121793 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:56:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause: exit status 80 (2.17150704s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-121793 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T18:56:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.75s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 stop: (1.302878106s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121793 --log_dir /tmp/nospam-121793 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (77.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535807 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1202 18:56:57.364965    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:57.371368    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:57.382833    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:57.404288    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:57.445702    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:57.527257    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:57.688804    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:58.010933    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:58.654918    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:56:59.936731    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:57:02.498048    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:57:07.619402    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 18:57:17.861484    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-535807 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.535140894s)
--- PASS: TestFunctional/serial/StartWithProxy (77.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.68s)

=== RUN   TestFunctional/serial/SoftStart
I1202 18:57:29.240763    4470 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535807 --alsologtostderr -v=8
E1202 18:57:38.343078    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-535807 --alsologtostderr -v=8: (26.672547565s)
functional_test.go:678: soft start took 26.67852683s for "functional-535807" cluster.
I1202 18:57:55.914138    4470 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (26.68s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-535807 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-535807 cache add registry.k8s.io/pause:3.1: (1.163749351s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-535807 cache add registry.k8s.io/pause:3.3: (1.200831955s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-535807 cache add registry.k8s.io/pause:latest: (1.141120059s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-535807 /tmp/TestFunctionalserialCacheCmdcacheadd_local3490838195/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cache add minikube-local-cache-test:functional-535807
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cache delete minikube-local-cache-test:functional-535807
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-535807
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (311.431731ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 kubectl -- --context functional-535807 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-535807 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (33.05s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535807 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 18:58:19.305819    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-535807 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.049714045s)
functional_test.go:776: restart took 33.049803322s for "functional-535807" cluster.
I1202 18:58:36.626046    4470 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (33.05s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-535807 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-535807 logs: (1.469682298s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.44s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 logs --file /tmp/TestFunctionalserialLogsFileCmd3235029313/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-535807 logs --file /tmp/TestFunctionalserialLogsFileCmd3235029313/001/logs.txt: (1.435705698s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (4.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-535807 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-535807
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-535807: exit status 115 (378.486868ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31413 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-535807 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-535807 delete -f testdata/invalidsvc.yaml: (1.002676728s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 config get cpus: exit status 14 (85.163655ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 config get cpus: exit status 14 (61.515035ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (14.76s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-535807 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-535807 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 30783: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.76s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535807 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-535807 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.730091ms)

                                                
                                                
-- stdout --
	* [functional-535807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:09:12.406436   30307 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:09:12.406639   30307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:09:12.406668   30307 out.go:374] Setting ErrFile to fd 2...
	I1202 19:09:12.406688   30307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:09:12.407237   30307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:09:12.408442   30307 out.go:368] Setting JSON to false
	I1202 19:09:12.409447   30307 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3091,"bootTime":1764699462,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:09:12.409559   30307 start.go:143] virtualization:  
	I1202 19:09:12.412710   30307 out.go:179] * [functional-535807] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:09:12.416658   30307 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:09:12.416768   30307 notify.go:221] Checking for updates...
	I1202 19:09:12.422554   30307 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:09:12.425479   30307 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:09:12.428341   30307 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:09:12.431260   30307 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:09:12.434090   30307 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:09:12.437364   30307 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:09:12.437937   30307 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:09:12.465835   30307 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:09:12.465937   30307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:09:12.532948   30307 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:09:12.523106926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:09:12.533077   30307 docker.go:319] overlay module found
	I1202 19:09:12.536131   30307 out.go:179] * Using the docker driver based on existing profile
	I1202 19:09:12.538932   30307 start.go:309] selected driver: docker
	I1202 19:09:12.538952   30307 start.go:927] validating driver "docker" against &{Name:functional-535807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-535807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:09:12.539091   30307 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:09:12.542518   30307 out.go:203] 
	W1202 19:09:12.545421   30307 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 19:09:12.548206   30307 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535807 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-535807 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-535807 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.39696ms)

                                                
                                                
-- stdout --
	* [functional-535807] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:09:12.223030   30260 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:09:12.223254   30260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:09:12.223267   30260 out.go:374] Setting ErrFile to fd 2...
	I1202 19:09:12.223273   30260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:09:12.223659   30260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:09:12.224013   30260 out.go:368] Setting JSON to false
	I1202 19:09:12.224875   30260 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3091,"bootTime":1764699462,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:09:12.224942   30260 start.go:143] virtualization:  
	I1202 19:09:12.228616   30260 out.go:179] * [functional-535807] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1202 19:09:12.232551   30260 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:09:12.232701   30260 notify.go:221] Checking for updates...
	I1202 19:09:12.238651   30260 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:09:12.241512   30260 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:09:12.244436   30260 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:09:12.247407   30260 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:09:12.250242   30260 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:09:12.253696   30260 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:09:12.254309   30260 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:09:12.279081   30260 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:09:12.279188   30260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:09:12.340128   30260 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:09:12.331156778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:09:12.340231   30260 docker.go:319] overlay module found
	I1202 19:09:12.345129   30260 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 19:09:12.348033   30260 start.go:309] selected driver: docker
	I1202 19:09:12.348057   30260 start.go:927] validating driver "docker" against &{Name:functional-535807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-535807 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:09:12.348220   30260 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:09:12.351681   30260 out.go:203] 
	W1202 19:09:12.354694   30260 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 19:09:12.357515   30260 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
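For reference, the memory validation that produced the exit above can be reproduced by hand. A minimal sketch, assuming the functional-535807 profile already exists and out/minikube-linux-arm64 is the binary under test (the French wording above presumably comes from a French locale set in the test environment):

    # Request less than the 1800MB minimum; expect exit status 23 and an
    # RSRC_INSUFFICIENT_REQ_MEMORY message, localized per the active locale.
    out/minikube-linux-arm64 start -p functional-535807 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=crio
    echo "exit status: $?"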

                                                
                                    
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
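The status checks above can be rerun by hand with the same three invocations the test drives; a minimal sketch, assuming the functional-535807 profile:

    # Default output, a custom Go-template format string, and JSON
    out/minikube-linux-arm64 -p functional-535807 status
    out/minikube-linux-arm64 -p functional-535807 status \
      -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-535807 status -o json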

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c01d717b-481e-43e8-b93f-dadf345ac947] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003035489s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-535807 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-535807 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-535807 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-535807 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f35c1868-86a0-404c-b2c1-f331398cfabf] Pending
helpers_test.go:352: "sp-pod" [f35c1868-86a0-404c-b2c1-f331398cfabf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f35c1868-86a0-404c-b2c1-f331398cfabf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003106682s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-535807 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-535807 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-535807 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [91204ecd-60ca-4ad8-9b1a-c7b0ff2502ec] Pending
helpers_test.go:352: "sp-pod" [91204ecd-60ca-4ad8-9b1a-c7b0ff2502ec] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004100617s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-535807 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.95s)
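The persistence check above boils down to writing through the claim, recreating the pod, and confirming the file survives. A minimal sketch of that flow, assuming the testdata manifests from the minikube repo and a kubectl context named functional-535807:

    kubectl --context functional-535807 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-535807 apply -f testdata/storage-provisioner/pod.yaml
    # ... wait for sp-pod to be Running, then write through the mounted claim
    kubectl --context functional-535807 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-535807 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-535807 apply -f testdata/storage-provisioner/pod.yaml
    # the file should still be present after the pod is recreated
    kubectl --context functional-535807 exec sp-pod -- ls /tmp/mount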

                                                
                                    
TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh -n functional-535807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cp functional-535807:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2566009513/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh -n functional-535807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh -n functional-535807 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)
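The copy round-trip above uses the cp and ssh subcommands in both directions; a minimal sketch, assuming the functional-535807 profile (the destination paths here are arbitrary):

    # host -> node, then read the file back over ssh
    out/minikube-linux-arm64 -p functional-535807 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-535807 ssh -n functional-535807 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-linux-arm64 -p functional-535807 cp functional-535807:/home/docker/cp-test.txt ./cp-test.txt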

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4470/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo cat /etc/test/nested/copy/4470/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4470.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo cat /etc/ssl/certs/4470.pem"
2025/12/02 19:09:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4470.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo cat /usr/share/ca-certificates/4470.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/44702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo cat /etc/ssl/certs/44702.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/44702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo cat /usr/share/ca-certificates/44702.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.94s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-535807 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 ssh "sudo systemctl is-active docker": exit status 1 (371.985558ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 ssh "sudo systemctl is-active containerd": exit status 1 (378.596615ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
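The non-zero exits above are expected: systemctl is-active returns a non-zero status for an inactive unit, which is what the test asserts for the runtimes that were not selected. A sketch of the same probes, assuming crio is the active runtime; the last line is an extra sanity check the test itself does not run:

    out/minikube-linux-arm64 -p functional-535807 ssh "sudo systemctl is-active docker"      # expect "inactive"
    out/minikube-linux-arm64 -p functional-535807 ssh "sudo systemctl is-active containerd"  # expect "inactive"
    out/minikube-linux-arm64 -p functional-535807 ssh "sudo systemctl is-active crio"        # expect "active"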

                                                
                                    
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-535807 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-535807 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-535807 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 26977: os: process already finished
helpers_test.go:519: unable to terminate pid 26777: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-535807 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-535807 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-535807 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0491216b-bded-4baf-9244-2823982f0ded] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [0491216b-bded-4baf-9244-2823982f0ded] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003826145s
I1202 18:58:54.599588    4470 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-535807 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.46.155 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-535807 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
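Taken together, the tunnel tests above follow a simple flow: start the tunnel, expose a LoadBalancer service, read its ingress IP, hit it, and tear the tunnel down. A condensed sketch, assuming testsvc.yaml from the test data; curl stands in for the HTTP check the test performs:

    out/minikube-linux-arm64 -p functional-535807 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    kubectl --context functional-535807 apply -f testdata/testsvc.yaml
    # once the tunnel assigns an address, the LoadBalancer ingress IP becomes readable
    LB_IP=$(kubectl --context functional-535807 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$LB_IP"
    kill "$TUNNEL_PID"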

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "376.443336ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "60.553563ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "350.120855ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "51.631906ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdany-port2092021586/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764702539877241739" to /tmp/TestFunctionalparallelMountCmdany-port2092021586/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764702539877241739" to /tmp/TestFunctionalparallelMountCmdany-port2092021586/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764702539877241739" to /tmp/TestFunctionalparallelMountCmdany-port2092021586/001/test-1764702539877241739
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (650.658579ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:09:00.528952    4470 retry.go:31] will retry after 435.009828ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 19:08 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 19:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 19:08 test-1764702539877241739
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh cat /mount-9p/test-1764702539877241739
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-535807 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [90cbd987-e07e-475c-9fad-00865118f61b] Pending
helpers_test.go:352: "busybox-mount" [90cbd987-e07e-475c-9fad-00865118f61b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [90cbd987-e07e-475c-9fad-00865118f61b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [90cbd987-e07e-475c-9fad-00865118f61b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003097336s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-535807 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdany-port2092021586/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.14s)
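The any-port mount test above is a host-to-guest 9p round-trip; a minimal sketch, with an arbitrary host directory and file name standing in for the test's temp dir:

    HOST_DIR=$(mktemp -d)
    out/minikube-linux-arm64 mount -p functional-535807 "$HOST_DIR":/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    echo hello > "$HOST_DIR/created-by-host"
    out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is present
    out/minikube-linux-arm64 -p functional-535807 ssh "cat /mount-9p/created-by-host"    # host file visible in the guest
    out/minikube-linux-arm64 -p functional-535807 ssh "sudo umount -f /mount-9p"
    kill "$MOUNT_PID"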

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdspecific-port3404993548/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.305204ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:09:07.360227    4470 retry.go:31] will retry after 559.780127ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdspecific-port3404993548/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 ssh "sudo umount -f /mount-9p": exit status 1 (283.112921ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-535807 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdspecific-port3404993548/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907143577/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907143577/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907143577/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T" /mount1: exit status 1 (531.588837ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:09:09.504251    4470 retry.go:31] will retry after 729.580842ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-535807 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907143577/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907143577/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-535807 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907143577/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.18s)
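The cleanup path verified above can also be invoked directly when mount helper processes are left behind; a one-line sketch using the same flag the test exercises:

    # kill lingering "minikube mount" helpers for the profile
    out/minikube-linux-arm64 mount -p functional-535807 --kill=true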

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-535807 service list -o json: (1.429360961s)
functional_test.go:1504: Took "1.429443314s" to run "out/minikube-linux-arm64 -p functional-535807 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535807 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535807 image ls --format short --alsologtostderr:
I1202 19:09:28.672425   32815 out.go:360] Setting OutFile to fd 1 ...
I1202 19:09:28.672594   32815 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:28.672617   32815 out.go:374] Setting ErrFile to fd 2...
I1202 19:09:28.672644   32815 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:28.673010   32815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:09:28.674550   32815 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:28.674765   32815 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:28.675332   32815 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
I1202 19:09:28.693208   32815 ssh_runner.go:195] Run: systemctl --version
I1202 19:09:28.693259   32815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
I1202 19:09:28.727585   32815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
I1202 19:09:28.836195   32815 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535807 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535807 image ls --format table --alsologtostderr:
I1202 19:09:29.696609   33130 out.go:360] Setting OutFile to fd 1 ...
I1202 19:09:29.696710   33130 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:29.696715   33130 out.go:374] Setting ErrFile to fd 2...
I1202 19:09:29.696720   33130 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:29.697066   33130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:09:29.698570   33130 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:29.698708   33130 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:29.699256   33130 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
I1202 19:09:29.726320   33130 ssh_runner.go:195] Run: systemctl --version
I1202 19:09:29.726374   33130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
I1202 19:09:29.759379   33130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
I1202 19:09:29.868206   33130 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535807 image ls --format json --alsologtostderr:
[{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registr
y.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"bb747ca923a5e1139badd
d6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.i
o/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b
a04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"52862
2"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535807 image ls --format json --alsologtostderr:
I1202 19:09:29.431042   33062 out.go:360] Setting OutFile to fd 1 ...
I1202 19:09:29.431156   33062 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:29.431161   33062 out.go:374] Setting ErrFile to fd 2...
I1202 19:09:29.431167   33062 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:29.431506   33062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:09:29.434231   33062 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:29.434418   33062 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:29.435035   33062 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
I1202 19:09:29.455030   33062 ssh_runner.go:195] Run: systemctl --version
I1202 19:09:29.455086   33062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
I1202 19:09:29.475877   33062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
I1202 19:09:29.585971   33062 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
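As the stderr above shows, the image ls subcommand resolves its listing by shelling into the node and running "sudo crictl images --output json". The same raw listing can be pulled directly when debugging a run like this one; a minimal sketch, reusing the profile name from this test:

    out/minikube-linux-arm64 -p functional-535807 ssh "sudo crictl images --output json"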

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535807 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535807 image ls --format yaml --alsologtostderr:
I1202 19:09:29.153508   32971 out.go:360] Setting OutFile to fd 1 ...
I1202 19:09:29.154081   32971 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:29.154116   32971 out.go:374] Setting ErrFile to fd 2...
I1202 19:09:29.154135   32971 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:29.154409   32971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:09:29.155043   32971 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:29.155208   32971 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:29.155772   32971 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
I1202 19:09:29.192319   32971 ssh_runner.go:195] Run: systemctl --version
I1202 19:09:29.192378   32971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
I1202 19:09:29.217585   32971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
I1202 19:09:29.327029   32971 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-535807 ssh pgrep buildkitd: exit status 1 (356.739889ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr: (3.372310504s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 309de069e45
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-535807
--> 67652eccbae
Successfully tagged localhost/my-image:functional-535807
67652eccbae3283253a01e60d8afddccff3b18046c487f49e68450e67e08b120
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-535807 image build -t localhost/my-image:functional-535807 testdata/build --alsologtostderr:
I1202 19:09:29.501287   33067 out.go:360] Setting OutFile to fd 1 ...
I1202 19:09:29.501430   33067 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:29.501435   33067 out.go:374] Setting ErrFile to fd 2...
I1202 19:09:29.501440   33067 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:09:29.501810   33067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:09:29.502726   33067 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:29.504059   33067 config.go:182] Loaded profile config "functional-535807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:09:29.504667   33067 cli_runner.go:164] Run: docker container inspect functional-535807 --format={{.State.Status}}
I1202 19:09:29.528440   33067 ssh_runner.go:195] Run: systemctl --version
I1202 19:09:29.528489   33067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-535807
I1202 19:09:29.549036   33067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-535807/id_rsa Username:docker}
I1202 19:09:29.664430   33067 build_images.go:162] Building image from path: /tmp/build.1124384122.tar
I1202 19:09:29.664510   33067 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 19:09:29.676209   33067 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1124384122.tar
I1202 19:09:29.681142   33067 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1124384122.tar: stat -c "%s %y" /var/lib/minikube/build/build.1124384122.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1124384122.tar': No such file or directory
I1202 19:09:29.681175   33067 ssh_runner.go:362] scp /tmp/build.1124384122.tar --> /var/lib/minikube/build/build.1124384122.tar (3072 bytes)
I1202 19:09:29.700206   33067 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1124384122
I1202 19:09:29.713487   33067 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1124384122 -xf /var/lib/minikube/build/build.1124384122.tar
I1202 19:09:29.736683   33067 crio.go:315] Building image: /var/lib/minikube/build/build.1124384122
I1202 19:09:29.736780   33067 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-535807 /var/lib/minikube/build/build.1124384122 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1202 19:09:32.777967   33067 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-535807 /var/lib/minikube/build/build.1124384122 --cgroup-manager=cgroupfs: (3.041148566s)
I1202 19:09:32.778034   33067 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1124384122
I1202 19:09:32.785797   33067 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1124384122.tar
I1202 19:09:32.793161   33067 build_images.go:218] Built localhost/my-image:functional-535807 from /tmp/build.1124384122.tar
I1202 19:09:32.793190   33067 build_images.go:134] succeeded building to: functional-535807
I1202 19:09:32.793196   33067 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
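The STEP lines in the build output above imply a build context equivalent to the following sketch, reconstructed from the log rather than copied from the actual testdata/build fixture (the contents of content.txt are not shown, so a placeholder is used):

    mkdir -p /tmp/build-context && cd /tmp/build-context
    printf 'placeholder\n' > content.txt     # stand-in; the real fixture contents are unknown
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-arm64 -p functional-535807 image build -t localhost/my-image:functional-535807 /tmp/build-context --alsologtostderr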

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-535807
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image rm kicbase/echo-server:functional-535807 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-535807 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-535807
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-535807
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-535807
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22021-2526/.minikube/files/etc/test/nested/copy/4470/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 cache add registry.k8s.io/pause:3.1: (1.2297941s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 cache add registry.k8s.io/pause:3.3: (1.139880642s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 cache add registry.k8s.io/pause:latest: (1.086291252s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2838387205/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cache add minikube-local-cache-test:functional-374330
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cache delete minikube-local-cache-test:functional-374330
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-374330
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (275.00253ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 logs: (1.063079831s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs4052399700/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 config get cpus: exit status 14 (78.572195ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 config get cpus: exit status 14 (68.672189ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (180.998467ms)

                                                
                                                
-- stdout --
	* [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:38:53.051098   64407 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:38:53.051263   64407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:53.051290   64407 out.go:374] Setting ErrFile to fd 2...
	I1202 19:38:53.051307   64407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:53.052079   64407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:38:53.052497   64407 out.go:368] Setting JSON to false
	I1202 19:38:53.053341   64407 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4871,"bootTime":1764699462,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:38:53.053413   64407 start.go:143] virtualization:  
	I1202 19:38:53.056737   64407 out.go:179] * [functional-374330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 19:38:53.059655   64407 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:38:53.059735   64407 notify.go:221] Checking for updates...
	I1202 19:38:53.065242   64407 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:38:53.068073   64407 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:38:53.070990   64407 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:38:53.073835   64407 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:38:53.076611   64407 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:38:53.079988   64407 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:38:53.080588   64407 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:38:53.107243   64407 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:38:53.107387   64407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:38:53.166432   64407 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:53.157897912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:38:53.166533   64407 docker.go:319] overlay module found
	I1202 19:38:53.169462   64407 out.go:179] * Using the docker driver based on existing profile
	I1202 19:38:53.172311   64407 start.go:309] selected driver: docker
	I1202 19:38:53.172334   64407 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:38:53.172437   64407 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:38:53.175803   64407 out.go:203] 
	W1202 19:38:53.178742   64407 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 19:38:53.181568   64407 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-374330 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.41s)
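The non-zero exit above is the expected negative case: minikube validates the requested memory even under --dry-run and rejects anything below the 1800MB usable minimum it reports. A sketch of the corresponding dry run that clears that floor (2048mb is an arbitrary value above the reported minimum, not taken from the test):

    out/minikube-linux-arm64 start -p functional-374330 --dry-run --memory 2048mb --alsologtostderr --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0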

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-374330 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (190.668162ms)

                                                
                                                
-- stdout --
	* [functional-374330] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:38:51.473946   64064 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:38:51.474143   64064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:51.474169   64064 out.go:374] Setting ErrFile to fd 2...
	I1202 19:38:51.474189   64064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:38:51.474586   64064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:38:51.475034   64064 out.go:368] Setting JSON to false
	I1202 19:38:51.475886   64064 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4870,"bootTime":1764699462,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 19:38:51.475981   64064 start.go:143] virtualization:  
	I1202 19:38:51.479620   64064 out.go:179] * [functional-374330] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1202 19:38:51.483780   64064 notify.go:221] Checking for updates...
	I1202 19:38:51.486986   64064 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 19:38:51.490153   64064 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:38:51.493122   64064 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	I1202 19:38:51.496060   64064 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	I1202 19:38:51.499010   64064 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 19:38:51.501943   64064 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:38:51.505387   64064 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:38:51.505980   64064 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:38:51.533719   64064 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 19:38:51.533826   64064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:38:51.591709   64064 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 19:38:51.583002025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:38:51.591807   64064 docker.go:319] overlay module found
	I1202 19:38:51.595003   64064 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 19:38:51.597795   64064 start.go:309] selected driver: docker
	I1202 19:38:51.597817   64064 start.go:927] validating driver "docker" against &{Name:functional-374330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-374330 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:38:51.597927   64064 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:38:51.601377   64064 out.go:203] 
	W1202 19:38:51.604356   64064 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 19:38:51.607359   64064 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh -n functional-374330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cp functional-374330:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2647085349/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh -n functional-374330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh -n functional-374330 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4470/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /etc/test/nested/copy/4470/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4470.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /etc/ssl/certs/4470.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4470.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /usr/share/ca-certificates/4470.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/44702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /etc/ssl/certs/44702.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/44702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /usr/share/ca-certificates/44702.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.65s)
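CertSync boils down to confirming that host-provided certificates show up in the guest in both certificate directories and as an OpenSSL-style hash link; the paths below are copied from the run above:
	out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /etc/ssl/certs/4470.pem"
	out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /usr/share/ca-certificates/4470.pem"
	out/minikube-linux-arm64 -p functional-374330 ssh "sudo cat /etc/ssl/certs/51391683.0"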

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh "sudo systemctl is-active docker": exit status 1 (273.323361ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh "sudo systemctl is-active containerd": exit status 1 (280.478234ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)
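Note that the Non-zero exit lines above are the expected outcome: on a crio node the other runtimes should report inactive, and systemctl is-active signals that with a non-zero exit (status 3 in this run). Reproduced by hand:
	out/minikube-linux-arm64 -p functional-374330 ssh "sudo systemctl is-active docker"       # expect "inactive", non-zero exit
	out/minikube-linux-arm64 -p functional-374330 ssh "sudo systemctl is-active containerd"   # expect "inactive", non-zero exit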

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-374330 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "311.661578ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "53.797776ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "343.294501ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.532087ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.40s)
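The ProfileCmd subtests differ only in output format; judging by the much shorter timings logged above, the --light variants skip the per-cluster status probing that the full listing performs:
	out/minikube-linux-arm64 profile list
	out/minikube-linux-arm64 profile list -l
	out/minikube-linux-arm64 profile list -o json
	out/minikube-linux-arm64 profile list -o json --light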

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2052892142/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.491885ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:38:39.484133    4470 retry.go:31] will retry after 490.812124ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2052892142/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh "sudo umount -f /mount-9p": exit status 1 (273.456899ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-374330 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2052892142/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.95s)
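specific-port pins the 9p server to a fixed port rather than a random one; a hand-run equivalent of the steps above (with /tmp/mount-src standing in for the long temporary host directory) would be:
	out/minikube-linux-arm64 mount -p functional-374330 /tmp/mount-src:/mount-9p --port 46464 &
	out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-374330 ssh "sudo umount -f /mount-9p"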

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T" /mount1: exit status 1 (619.370029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:38:41.698053    4470 retry.go:31] will retry after 600.578408ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-374330 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-374330 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3215118325/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.13s)
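VerifyCleanup mounts one host directory (again shortened to /tmp/mount-src here) at three guest targets and then relies on the --kill flag to tear all of the background mount processes down at once, as recorded above:
	out/minikube-linux-arm64 mount -p functional-374330 /tmp/mount-src:/mount1 &
	out/minikube-linux-arm64 mount -p functional-374330 /tmp/mount-src:/mount2 &
	out/minikube-linux-arm64 mount -p functional-374330 /tmp/mount-src:/mount3 &
	out/minikube-linux-arm64 mount -p functional-374330 --kill=true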

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-374330 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-374330 image ls --format short --alsologtostderr:
I1202 19:38:57.663198   65470 out.go:360] Setting OutFile to fd 1 ...
I1202 19:38:57.663363   65470 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:57.663373   65470 out.go:374] Setting ErrFile to fd 2...
I1202 19:38:57.663378   65470 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:57.663633   65470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:38:57.664221   65470 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:38:57.664354   65470 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:38:57.664864   65470 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
I1202 19:38:57.681426   65470 ssh_runner.go:195] Run: systemctl --version
I1202 19:38:57.681489   65470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
I1202 19:38:57.698630   65470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
I1202 19:38:57.800129   65470 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-374330 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.1               │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1            │ d7b100cd9a77b │ 517kB  │
│ registry.k8s.io/pause                   │ 3.3               │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest            │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ latest            │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 66749159455b3 │ 29MB   │
│ localhost/my-image                      │ functional-374330 │ 8179a101ae21b │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ ccd634d9bcc36 │ 84.9MB │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-374330 image ls --format table --alsologtostderr:
I1202 19:39:02.174732   65966 out.go:360] Setting OutFile to fd 1 ...
I1202 19:39:02.175135   65966 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:39:02.175148   65966 out.go:374] Setting ErrFile to fd 2...
I1202 19:39:02.175155   65966 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:39:02.175864   65966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:39:02.177015   65966 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:39:02.177219   65966 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:39:02.177974   65966 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
I1202 19:39:02.195053   65966 ssh_runner.go:195] Run: systemctl --version
I1202 19:39:02.195107   65966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
I1202 19:39:02.211914   65966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
I1202 19:39:02.316475   65966 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-374330 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"f8b04330a2b846a451f8aa81ef55312d32aea5886a99f86b203009bfc146f498","repoDigests":["docker.io/library/3b8a247b6e0ae09b84af15beeb861daa221afd9cf763c9b957d03d9316530f7d-tmp@sha256:ddd118516631040d4a0ccddb7731ea297d0ba2cd182293dcab6aeda18c75cd1f"],"repoTags":[],"size":"1638178"},{
"id":"66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29035622"},{"id":"8179a101ae21bbb1adf40da69b32bcdfb657dabf2743d87cc55cfd6d36cd2882","repoDigests":["localhost/my-image@sha256:44753800fea83627a4a520179ab5d291df3a6614d078614467a06fae3d0057b9"],"repoTags":["localhost/my-image:functional-374330"],"size":"1640791"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84947242"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010"],"repoTags":["registry.k8s.io/
kube-proxy:v1.35.0-beta.0"],"size":"74105124"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"517328"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74488375"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s
.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60854229"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72167568"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49819792"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-374330 image ls --format json --alsologtostderr:
I1202 19:39:01.942849   65930 out.go:360] Setting OutFile to fd 1 ...
I1202 19:39:01.943045   65930 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:39:01.943075   65930 out.go:374] Setting ErrFile to fd 2...
I1202 19:39:01.943097   65930 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:39:01.943467   65930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:39:01.944248   65930 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:39:01.944424   65930 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:39:01.945184   65930 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
I1202 19:39:01.965101   65930 ssh_runner.go:195] Run: systemctl --version
I1202 19:39:01.965161   65930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
I1202 19:39:01.983149   65930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
I1202 19:39:02.088349   65930 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-374330 image ls --format yaml --alsologtostderr:
- id: 66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29035622"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60854229"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3702403ab8dc0024f1be9dc9862dfa959771f2240cdb91763335dc79253c53bf
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72167568"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde
repoTags:
- registry.k8s.io/pause:3.10.1
size: "517328"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74488375"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7d2be617f22b04cb68eeb15dadac7b04a6c6cca8b9bf6edff1337bdf3d567da6
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84947242"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:356477b01dc6337b94d3e8f5a29fd2f927b4af4932a4b16e5009efb6d14e8010
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74105124"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:66d9cce0df3bdcafff04c48bba04739320f3c4af865c3242d3c9be2bde891b23
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49819792"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-374330 image ls --format yaml --alsologtostderr:
I1202 19:38:57.881621   65508 out.go:360] Setting OutFile to fd 1 ...
I1202 19:38:57.881829   65508 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:57.881845   65508 out.go:374] Setting ErrFile to fd 2...
I1202 19:38:57.881852   65508 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:57.882139   65508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:38:57.882795   65508 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:38:57.882962   65508 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:38:57.883506   65508 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
I1202 19:38:57.900420   65508 ssh_runner.go:195] Run: systemctl --version
I1202 19:38:57.900479   65508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
I1202 19:38:57.918565   65508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
I1202 19:38:58.021311   65508 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)
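The four ImageList subtests are the same listing (backed by "sudo crictl images --output json" on the node, per the stderr above) rendered four ways:
	out/minikube-linux-arm64 -p functional-374330 image ls --format short
	out/minikube-linux-arm64 -p functional-374330 image ls --format table
	out/minikube-linux-arm64 -p functional-374330 image ls --format json
	out/minikube-linux-arm64 -p functional-374330 image ls --format yaml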

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-374330 ssh pgrep buildkitd: exit status 1 (271.96036ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image build -t localhost/my-image:functional-374330 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-374330 image build -t localhost/my-image:functional-374330 testdata/build --alsologtostderr: (3.281001249s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-374330 image build -t localhost/my-image:functional-374330 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f8b04330a2b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-374330
--> 8179a101ae2
Successfully tagged localhost/my-image:functional-374330
8179a101ae21bbb1adf40da69b32bcdfb657dabf2743d87cc55cfd6d36cd2882
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-374330 image build -t localhost/my-image:functional-374330 testdata/build --alsologtostderr:
I1202 19:38:58.394074   65613 out.go:360] Setting OutFile to fd 1 ...
I1202 19:38:58.394180   65613 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:58.394190   65613 out.go:374] Setting ErrFile to fd 2...
I1202 19:38:58.394196   65613 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:38:58.394462   65613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
I1202 19:38:58.395067   65613 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:38:58.395751   65613 config.go:182] Loaded profile config "functional-374330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 19:38:58.396277   65613 cli_runner.go:164] Run: docker container inspect functional-374330 --format={{.State.Status}}
I1202 19:38:58.413462   65613 ssh_runner.go:195] Run: systemctl --version
I1202 19:38:58.413528   65613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-374330
I1202 19:38:58.431029   65613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/functional-374330/id_rsa Username:docker}
I1202 19:38:58.532125   65613 build_images.go:162] Building image from path: /tmp/build.2979918492.tar
I1202 19:38:58.532192   65613 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 19:38:58.539518   65613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2979918492.tar
I1202 19:38:58.543066   65613 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2979918492.tar: stat -c "%s %y" /var/lib/minikube/build/build.2979918492.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2979918492.tar': No such file or directory
I1202 19:38:58.543095   65613 ssh_runner.go:362] scp /tmp/build.2979918492.tar --> /var/lib/minikube/build/build.2979918492.tar (3072 bytes)
I1202 19:38:58.560089   65613 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2979918492
I1202 19:38:58.567781   65613 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2979918492 -xf /var/lib/minikube/build/build.2979918492.tar
I1202 19:38:58.575933   65613 crio.go:315] Building image: /var/lib/minikube/build/build.2979918492
I1202 19:38:58.576022   65613 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-374330 /var/lib/minikube/build/build.2979918492 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1202 19:39:01.601543   65613 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-374330 /var/lib/minikube/build/build.2979918492 --cgroup-manager=cgroupfs: (3.025492718s)
I1202 19:39:01.601610   65613 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2979918492
I1202 19:39:01.609585   65613 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2979918492.tar
I1202 19:39:01.619385   65613 build_images.go:218] Built localhost/my-image:functional-374330 from /tmp/build.2979918492.tar
I1202 19:39:01.619485   65613 build_images.go:134] succeeded building to: functional-374330
I1202 19:39:01.619492   65613 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.81s)
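As the stderr above shows, image build tars the local context, copies it to the node, and on a crio runtime delegates to podman build there; the user-facing steps are just:
	out/minikube-linux-arm64 -p functional-374330 image build -t localhost/my-image:functional-374330 testdata/build
	out/minikube-linux-arm64 -p functional-374330 image ls    # confirm localhost/my-image:functional-374330 is listed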

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-374330
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image rm kicbase/echo-server:functional-374330 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-374330 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-374330
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-374330
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-374330
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (196.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1202 19:41:45.853822    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:45.860174    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:45.871734    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:45.893216    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:45.934676    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:46.015981    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:46.177601    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:46.498923    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:47.140272    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:48.421529    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:50.982795    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:56.105158    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:41:57.357128    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:42:06.346533    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:42:26.828705    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:43:07.790556    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:43:46.175376    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m15.85603961s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (196.70s)
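For reference, the cluster under test is brought up with the flags recorded above (a multi-control-plane HA profile on the docker driver with crio), then checked with status:
	out/minikube-linux-arm64 -p ha-791576 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5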

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 kubectl -- rollout status deployment/busybox: (4.187944793s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-npkff -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-xjn7v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-zjghb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-npkff -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-xjn7v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-zjghb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-npkff -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-xjn7v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-zjghb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.82s)
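DeployApp rolls out the busybox deployment from testdata/ha/ha-pod-dns-test.yaml and then checks in-cluster DNS from every replica; per pod the checks are (pod name below is a placeholder):
	out/minikube-linux-arm64 -p ha-791576 kubectl -- exec <busybox-pod> -- nslookup kubernetes.io
	out/minikube-linux-arm64 -p ha-791576 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default
	out/minikube-linux-arm64 -p ha-791576 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local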

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-npkff -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-npkff -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-xjn7v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-xjn7v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-zjghb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 kubectl -- exec busybox-7b57f96db7-zjghb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.40s)

TestMultiControlPlane/serial/AddWorkerNode (58.17s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 node add --alsologtostderr -v 5
E1202 19:44:29.712480    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 node add --alsologtostderr -v 5: (57.11968095s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5: (1.048845056s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.17s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-791576 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.077808303s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (19.63s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 status --output json --alsologtostderr -v 5: (1.023145242s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp testdata/cp-test.txt ha-791576:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576_ha-791576-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 "sudo cat /home/docker/cp-test_ha-791576_ha-791576-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576_ha-791576-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m03 "sudo cat /home/docker/cp-test_ha-791576_ha-791576-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576:/home/docker/cp-test.txt ha-791576-m04:/home/docker/cp-test_ha-791576_ha-791576-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m04 "sudo cat /home/docker/cp-test_ha-791576_ha-791576-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp testdata/cp-test.txt ha-791576-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m02:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m02_ha-791576.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576 "sudo cat /home/docker/cp-test_ha-791576-m02_ha-791576.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m02:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576-m02_ha-791576-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m03 "sudo cat /home/docker/cp-test_ha-791576-m02_ha-791576-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m02:/home/docker/cp-test.txt ha-791576-m04:/home/docker/cp-test_ha-791576-m02_ha-791576-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m04 "sudo cat /home/docker/cp-test_ha-791576-m02_ha-791576-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp testdata/cp-test.txt ha-791576-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m03_ha-791576.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576 "sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m03_ha-791576-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 "sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m03:/home/docker/cp-test.txt ha-791576-m04:/home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m04 "sudo cat /home/docker/cp-test_ha-791576-m03_ha-791576-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp testdata/cp-test.txt ha-791576-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3436833152/001/cp-test_ha-791576-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576:/home/docker/cp-test_ha-791576-m04_ha-791576.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576 "sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m02:/home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m02 "sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 cp ha-791576-m04:/home/docker/cp-test.txt ha-791576-m03:/home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 ssh -n ha-791576-m03 "sudo cat /home/docker/cp-test_ha-791576-m04_ha-791576-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.63s)

TestMultiControlPlane/serial/StopSecondaryNode (12.81s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 node stop m02 --alsologtostderr -v 5: (12.038057716s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5: exit status 7 (775.286955ms)
-- stdout --
	ha-791576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-791576-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-791576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-791576-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1202 19:45:45.981025   81702 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:45:45.981137   81702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:45:45.981148   81702 out.go:374] Setting ErrFile to fd 2...
	I1202 19:45:45.981153   81702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:45:45.981399   81702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:45:45.981576   81702 out.go:368] Setting JSON to false
	I1202 19:45:45.981610   81702 mustload.go:66] Loading cluster: ha-791576
	I1202 19:45:45.981711   81702 notify.go:221] Checking for updates...
	I1202 19:45:45.982098   81702 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:45:45.982117   81702 status.go:174] checking status of ha-791576 ...
	I1202 19:45:45.982644   81702 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:45:46.002443   81702 status.go:371] ha-791576 host status = "Running" (err=<nil>)
	I1202 19:45:46.002468   81702 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:45:46.002818   81702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576
	I1202 19:45:46.043972   81702 host.go:66] Checking if "ha-791576" exists ...
	I1202 19:45:46.044545   81702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:45:46.044609   81702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576
	I1202 19:45:46.064653   81702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576/id_rsa Username:docker}
	I1202 19:45:46.167626   81702 ssh_runner.go:195] Run: systemctl --version
	I1202 19:45:46.174371   81702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:45:46.186996   81702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 19:45:46.255315   81702 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-02 19:45:46.243986325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 19:45:46.255842   81702 kubeconfig.go:125] found "ha-791576" server: "https://192.168.49.254:8443"
	I1202 19:45:46.255885   81702 api_server.go:166] Checking apiserver status ...
	I1202 19:45:46.255944   81702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:45:46.267805   81702 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	I1202 19:45:46.276219   81702 api_server.go:182] apiserver freezer: "9:freezer:/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio/crio-ec1236f2c069546dcae36ca1bccad7c2e750e46875e835ab27799dd989c5cd03"
	I1202 19:45:46.276283   81702 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f426f8269bd97c2283cd51b8c971707538c05bf6555479012901b37cb9631d94/crio/crio-ec1236f2c069546dcae36ca1bccad7c2e750e46875e835ab27799dd989c5cd03/freezer.state
	I1202 19:45:46.284084   81702 api_server.go:204] freezer state: "THAWED"
	I1202 19:45:46.284113   81702 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 19:45:46.292480   81702 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 19:45:46.292507   81702 status.go:463] ha-791576 apiserver status = Running (err=<nil>)
	I1202 19:45:46.292518   81702 status.go:176] ha-791576 status: &{Name:ha-791576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 19:45:46.292538   81702 status.go:174] checking status of ha-791576-m02 ...
	I1202 19:45:46.292851   81702 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:45:46.311362   81702 status.go:371] ha-791576-m02 host status = "Stopped" (err=<nil>)
	I1202 19:45:46.311386   81702 status.go:384] host is not running, skipping remaining checks
	I1202 19:45:46.311394   81702 status.go:176] ha-791576-m02 status: &{Name:ha-791576-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 19:45:46.311418   81702 status.go:174] checking status of ha-791576-m03 ...
	I1202 19:45:46.311819   81702 cli_runner.go:164] Run: docker container inspect ha-791576-m03 --format={{.State.Status}}
	I1202 19:45:46.330121   81702 status.go:371] ha-791576-m03 host status = "Running" (err=<nil>)
	I1202 19:45:46.330145   81702 host.go:66] Checking if "ha-791576-m03" exists ...
	I1202 19:45:46.330446   81702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m03
	I1202 19:45:46.346158   81702 host.go:66] Checking if "ha-791576-m03" exists ...
	I1202 19:45:46.346636   81702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:45:46.346683   81702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m03
	I1202 19:45:46.364413   81702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m03/id_rsa Username:docker}
	I1202 19:45:46.467113   81702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:45:46.479895   81702 kubeconfig.go:125] found "ha-791576" server: "https://192.168.49.254:8443"
	I1202 19:45:46.479925   81702 api_server.go:166] Checking apiserver status ...
	I1202 19:45:46.479965   81702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:45:46.491114   81702 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	I1202 19:45:46.500043   81702 api_server.go:182] apiserver freezer: "9:freezer:/docker/8177330b5e81d91bea824e231fe813a66447e2776b5b608a00fd89ebe3fefe7c/crio/crio-3e771ddc038e5771a8332283cb6ee0c26f59010f06af5cef32ccf049641cc542"
	I1202 19:45:46.500134   81702 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8177330b5e81d91bea824e231fe813a66447e2776b5b608a00fd89ebe3fefe7c/crio/crio-3e771ddc038e5771a8332283cb6ee0c26f59010f06af5cef32ccf049641cc542/freezer.state
	I1202 19:45:46.512202   81702 api_server.go:204] freezer state: "THAWED"
	I1202 19:45:46.512241   81702 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 19:45:46.521340   81702 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 19:45:46.521368   81702 status.go:463] ha-791576-m03 apiserver status = Running (err=<nil>)
	I1202 19:45:46.521377   81702 status.go:176] ha-791576-m03 status: &{Name:ha-791576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 19:45:46.521393   81702 status.go:174] checking status of ha-791576-m04 ...
	I1202 19:45:46.521757   81702 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:45:46.542092   81702 status.go:371] ha-791576-m04 host status = "Running" (err=<nil>)
	I1202 19:45:46.542117   81702 host.go:66] Checking if "ha-791576-m04" exists ...
	I1202 19:45:46.542392   81702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791576-m04
	I1202 19:45:46.560259   81702 host.go:66] Checking if "ha-791576-m04" exists ...
	I1202 19:45:46.560562   81702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 19:45:46.560622   81702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791576-m04
	I1202 19:45:46.588367   81702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/ha-791576-m04/id_rsa Username:docker}
	I1202 19:45:46.690691   81702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:45:46.704825   81702 status.go:176] ha-791576-m04 status: &{Name:ha-791576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.81s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.31s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 node start m02 --alsologtostderr -v 5: (28.798143306s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5: (1.368375099s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.31s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.333275783s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.33s)

TestMultiControlPlane/serial/StopCluster (24.13s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 stop --alsologtostderr -v 5: (24.009778633s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5: exit status 7 (116.025375ms)
-- stdout --
	ha-791576
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-791576-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-791576-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1202 19:55:44.068496   93224 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:55:44.068612   93224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.068621   93224 out.go:374] Setting ErrFile to fd 2...
	I1202 19:55:44.068627   93224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:55:44.069188   93224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 19:55:44.069395   93224 out.go:368] Setting JSON to false
	I1202 19:55:44.069426   93224 mustload.go:66] Loading cluster: ha-791576
	I1202 19:55:44.069485   93224 notify.go:221] Checking for updates...
	I1202 19:55:44.069880   93224 config.go:182] Loaded profile config "ha-791576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:55:44.069900   93224 status.go:174] checking status of ha-791576 ...
	I1202 19:55:44.070417   93224 cli_runner.go:164] Run: docker container inspect ha-791576 --format={{.State.Status}}
	I1202 19:55:44.090162   93224 status.go:371] ha-791576 host status = "Stopped" (err=<nil>)
	I1202 19:55:44.090188   93224 status.go:384] host is not running, skipping remaining checks
	I1202 19:55:44.090195   93224 status.go:176] ha-791576 status: &{Name:ha-791576 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 19:55:44.090227   93224 status.go:174] checking status of ha-791576-m02 ...
	I1202 19:55:44.090550   93224 cli_runner.go:164] Run: docker container inspect ha-791576-m02 --format={{.State.Status}}
	I1202 19:55:44.115238   93224 status.go:371] ha-791576-m02 host status = "Stopped" (err=<nil>)
	I1202 19:55:44.115263   93224 status.go:384] host is not running, skipping remaining checks
	I1202 19:55:44.115270   93224 status.go:176] ha-791576-m02 status: &{Name:ha-791576-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 19:55:44.115289   93224 status.go:174] checking status of ha-791576-m04 ...
	I1202 19:55:44.115566   93224 cli_runner.go:164] Run: docker container inspect ha-791576-m04 --format={{.State.Status}}
	I1202 19:55:44.132496   93224 status.go:371] ha-791576-m04 host status = "Stopped" (err=<nil>)
	I1202 19:55:44.132528   93224 status.go:384] host is not running, skipping remaining checks
	I1202 19:55:44.132536   93224 status.go:176] ha-791576-m04 status: &{Name:ha-791576-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.13s)

TestMultiControlPlane/serial/AddSecondaryNode (95.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 node add --control-plane --alsologtostderr -v 5
E1202 20:03:20.450207    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:03:29.246888    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 node add --control-plane --alsologtostderr -v 5: (1m33.98647093s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-791576 status --alsologtostderr -v 5: (1.08444172s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (95.07s)

TestJSONOutput/start/Command (78.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-289137 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-289137 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.476079424s)
--- PASS: TestJSONOutput/start/Command (78.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-289137 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-289137 --output=json --user=testUser: (5.833948671s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-972377 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-972377 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (87.347335ms)
-- stdout --
	{"specversion":"1.0","id":"19347be9-3d79-4786-9a72-48a67999d878","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-972377] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7f1f7fa-34d4-44fd-9164-a7d673b37328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22021"}}
	{"specversion":"1.0","id":"8d0b4b0a-82da-4aa1-a7eb-c18ec7c9882c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05f6d071-bdf9-41c8-a8ba-3bd0f382ad75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig"}}
	{"specversion":"1.0","id":"c08729a2-c3d7-4423-9f2e-2ba5cb5391da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube"}}
	{"specversion":"1.0","id":"55bae03a-73fe-4b20-b4d3-11c43efdd565","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5162a642-abc8-4c54-aaa1-c1a97a8e571e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"19f7e8e2-5c37-4cf0-aef0-329e897f8169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-972377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-972377
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (57.25s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-575663 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-575663 --network=: (55.047602999s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-575663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-575663
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-575663: (2.184007883s)
--- PASS: TestKicCustomNetwork/create_custom_network (57.25s)

TestKicCustomNetwork/use_default_bridge_network (36.6s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-953337 --network=bridge
E1202 20:06:45.857826    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:06:57.357110    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-953337 --network=bridge: (34.164170468s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-953337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-953337
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-953337: (2.402963005s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.60s)

TestKicExistingNetwork (33.15s)

=== RUN   TestKicExistingNetwork
I1202 20:07:00.936208    4470 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 20:07:00.951751    4470 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 20:07:00.952609    4470 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1202 20:07:00.952641    4470 cli_runner.go:164] Run: docker network inspect existing-network
W1202 20:07:00.968034    4470 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1202 20:07:00.968065    4470 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1202 20:07:00.968081    4470 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1202 20:07:00.968207    4470 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 20:07:00.985390    4470 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-56dad1208e3b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:3e:f9:4b:bf:54} reservation:<nil>}
I1202 20:07:00.985711    4470 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195f3e0}
I1202 20:07:00.985738    4470 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1202 20:07:00.985788    4470 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1202 20:07:01.047178    4470 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-203073 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-203073 --network=existing-network: (30.892502309s)
helpers_test.go:175: Cleaning up "existing-network-203073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-203073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-203073: (2.11524223s)
I1202 20:07:34.073550    4470 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.15s)

TestKicCustomSubnet (35.2s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-980019 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-980019 --subnet=192.168.60.0/24: (32.971193166s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-980019 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-980019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-980019
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-980019: (2.206132295s)
--- PASS: TestKicCustomSubnet (35.20s)

TestKicStaticIP (35.81s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-265841 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-265841 --static-ip=192.168.200.200: (33.387346409s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-265841 ip
helpers_test.go:175: Cleaning up "static-ip-265841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-265841
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-265841: (2.269829832s)
--- PASS: TestKicStaticIP (35.81s)

TestMainNoArgs (0.1s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.10s)

TestMinikubeProfile (70.65s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-668339 --driver=docker  --container-runtime=crio
E1202 20:08:46.175825    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-668339 --driver=docker  --container-runtime=crio: (30.337062182s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-671194 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-671194 --driver=docker  --container-runtime=crio: (34.579624048s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-668339
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-671194
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-671194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-671194
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-671194: (2.231031097s)
helpers_test.go:175: Cleaning up "first-668339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-668339
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-668339: (2.072239787s)
--- PASS: TestMinikubeProfile (70.65s)

TestMountStart/serial/StartWithMountFirst (8.73s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-980814 --memory=3072 --mount-string /tmp/TestMountStartserial2866690629/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-980814 --memory=3072 --mount-string /tmp/TestMountStartserial2866690629/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.727173385s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.73s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-980814 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-982562 --memory=3072 --mount-string /tmp/TestMountStartserial2866690629/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-982562 --memory=3072 --mount-string /tmp/TestMountStartserial2866690629/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.845339479s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.85s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-982562 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-980814 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-980814 --alsologtostderr -v=5: (1.724468217s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-982562 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-982562
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-982562: (1.330949379s)
--- PASS: TestMountStart/serial/Stop (1.33s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-982562
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-982562: (7.519291533s)
--- PASS: TestMountStart/serial/RestartStopped (8.52s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-982562 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)
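
For reference, a minimal shell sketch of the stop/restart/verify cycle above (same assumptions as the earlier mount sketch; per the run above, the mount is visible again after a plain start with no mount flags repeated):

    minikube stop -p mount-demo
    minikube start -p mount-demo
    minikube -p mount-demo ssh -- ls /minikube-host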

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (141.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305909 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 20:11:45.851671    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:11:57.357120    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-305909 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.489758448s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (141.03s)
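
For reference, a minimal shell sketch of the two-node bring-up above (profile name arbitrary; flags taken from the logged command):

    minikube start -p multinode-demo --wait=true --memory=3072 --nodes=2 \
      --driver=docker --container-runtime=crio
    minikube -p multinode-demo status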

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-305909 -- rollout status deployment/busybox: (3.085086409s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-9hg5h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-fzcmp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-9hg5h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-fzcmp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-9hg5h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-fzcmp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.82s)
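
For reference, a minimal shell sketch of the DNS checks above (profile name arbitrary; the manifest is the test's testdata/multinodes/multinode-pod-dns-test.yaml, which creates the busybox deployment exercised here):

    minikube kubectl -p multinode-demo -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-demo -- rollout status deployment/busybox
    for pod in $(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      minikube kubectl -p multinode-demo -- exec "$pod" -- nslookup kubernetes.io
      minikube kubectl -p multinode-demo -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done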

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-9hg5h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-9hg5h -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-fzcmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-305909 -- exec busybox-7b57f96db7-fzcmp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
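
For reference, a minimal shell sketch of the host-reachability check above for a single pod (profile name arbitrary; the nslookup/awk/cut pipeline is the one used in the run):

    pod=$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
    host_ip=$(minikube kubectl -p multinode-demo -- exec "$pod" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p multinode-demo -- exec "$pod" -- sh -c "ping -c 1 $host_ip"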

                                                
                                    
TestMultiNode/serial/AddNode (58.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-305909 -v=5 --alsologtostderr
E1202 20:13:46.175433    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-305909 -v=5 --alsologtostderr: (57.674673878s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.40s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-305909 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp testdata/cp-test.txt multinode-305909:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile996550590/001/cp-test_multinode-305909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909:/home/docker/cp-test.txt multinode-305909-m02:/home/docker/cp-test_multinode-305909_multinode-305909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m02 "sudo cat /home/docker/cp-test_multinode-305909_multinode-305909-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909:/home/docker/cp-test.txt multinode-305909-m03:/home/docker/cp-test_multinode-305909_multinode-305909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m03 "sudo cat /home/docker/cp-test_multinode-305909_multinode-305909-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp testdata/cp-test.txt multinode-305909-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile996550590/001/cp-test_multinode-305909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909-m02:/home/docker/cp-test.txt multinode-305909:/home/docker/cp-test_multinode-305909-m02_multinode-305909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909 "sudo cat /home/docker/cp-test_multinode-305909-m02_multinode-305909.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909-m02:/home/docker/cp-test.txt multinode-305909-m03:/home/docker/cp-test_multinode-305909-m02_multinode-305909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m03 "sudo cat /home/docker/cp-test_multinode-305909-m02_multinode-305909-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp testdata/cp-test.txt multinode-305909-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile996550590/001/cp-test_multinode-305909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909-m03:/home/docker/cp-test.txt multinode-305909:/home/docker/cp-test_multinode-305909-m03_multinode-305909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909 "sudo cat /home/docker/cp-test_multinode-305909-m03_multinode-305909.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 cp multinode-305909-m03:/home/docker/cp-test.txt multinode-305909-m02:/home/docker/cp-test_multinode-305909-m03_multinode-305909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 ssh -n multinode-305909-m02 "sudo cat /home/docker/cp-test_multinode-305909-m03_multinode-305909-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.76s)
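
For reference, a minimal shell sketch of the copy matrix above for one node pair: host-to-node, node-to-host, and node-to-node via minikube cp, each verified with an ssh cat (profile and file names arbitrary):

    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
      multinode-demo-m02:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"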

                                                
                                    
TestMultiNode/serial/StopNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-305909 node stop m03: (1.318630388s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-305909 status: exit status 7 (529.495921ms)

                                                
                                                
-- stdout --
	multinode-305909
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-305909-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-305909-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-305909 status --alsologtostderr: exit status 7 (541.377033ms)

                                                
                                                
-- stdout --
	multinode-305909
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-305909-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-305909-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:14:06.592463  145082 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:14:06.592640  145082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:14:06.592671  145082 out.go:374] Setting ErrFile to fd 2...
	I1202 20:14:06.592692  145082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:14:06.592972  145082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:14:06.593179  145082 out.go:368] Setting JSON to false
	I1202 20:14:06.593234  145082 mustload.go:66] Loading cluster: multinode-305909
	I1202 20:14:06.593317  145082 notify.go:221] Checking for updates...
	I1202 20:14:06.594741  145082 config.go:182] Loaded profile config "multinode-305909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:14:06.594791  145082 status.go:174] checking status of multinode-305909 ...
	I1202 20:14:06.595550  145082 cli_runner.go:164] Run: docker container inspect multinode-305909 --format={{.State.Status}}
	I1202 20:14:06.612900  145082 status.go:371] multinode-305909 host status = "Running" (err=<nil>)
	I1202 20:14:06.612921  145082 host.go:66] Checking if "multinode-305909" exists ...
	I1202 20:14:06.613218  145082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-305909
	I1202 20:14:06.641763  145082 host.go:66] Checking if "multinode-305909" exists ...
	I1202 20:14:06.642112  145082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:14:06.642195  145082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-305909
	I1202 20:14:06.659379  145082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/multinode-305909/id_rsa Username:docker}
	I1202 20:14:06.763359  145082 ssh_runner.go:195] Run: systemctl --version
	I1202 20:14:06.770184  145082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:14:06.783263  145082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:14:06.858172  145082 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 20:14:06.849288981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:14:06.858726  145082 kubeconfig.go:125] found "multinode-305909" server: "https://192.168.67.2:8443"
	I1202 20:14:06.858761  145082 api_server.go:166] Checking apiserver status ...
	I1202 20:14:06.858817  145082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:14:06.870767  145082 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup
	I1202 20:14:06.879178  145082 api_server.go:182] apiserver freezer: "9:freezer:/docker/47b86d32c12be75edd63ea51083250901b37fc61da3103315931611050508b17/crio/crio-861ee8b623b654e2e305b991efa108305adb1d0033f1626b7f938437de941c38"
	I1202 20:14:06.879261  145082 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/47b86d32c12be75edd63ea51083250901b37fc61da3103315931611050508b17/crio/crio-861ee8b623b654e2e305b991efa108305adb1d0033f1626b7f938437de941c38/freezer.state
	I1202 20:14:06.887392  145082 api_server.go:204] freezer state: "THAWED"
	I1202 20:14:06.887420  145082 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1202 20:14:06.896851  145082 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1202 20:14:06.896891  145082 status.go:463] multinode-305909 apiserver status = Running (err=<nil>)
	I1202 20:14:06.896908  145082 status.go:176] multinode-305909 status: &{Name:multinode-305909 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:14:06.896928  145082 status.go:174] checking status of multinode-305909-m02 ...
	I1202 20:14:06.897220  145082 cli_runner.go:164] Run: docker container inspect multinode-305909-m02 --format={{.State.Status}}
	I1202 20:14:06.914225  145082 status.go:371] multinode-305909-m02 host status = "Running" (err=<nil>)
	I1202 20:14:06.914248  145082 host.go:66] Checking if "multinode-305909-m02" exists ...
	I1202 20:14:06.914551  145082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-305909-m02
	I1202 20:14:06.931048  145082 host.go:66] Checking if "multinode-305909-m02" exists ...
	I1202 20:14:06.931379  145082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:14:06.931424  145082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-305909-m02
	I1202 20:14:06.948649  145082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22021-2526/.minikube/machines/multinode-305909-m02/id_rsa Username:docker}
	I1202 20:14:07.050848  145082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:14:07.063754  145082 status.go:176] multinode-305909-m02 status: &{Name:multinode-305909-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:14:07.063789  145082 status.go:174] checking status of multinode-305909-m03 ...
	I1202 20:14:07.064086  145082 cli_runner.go:164] Run: docker container inspect multinode-305909-m03 --format={{.State.Status}}
	I1202 20:14:07.081614  145082 status.go:371] multinode-305909-m03 host status = "Stopped" (err=<nil>)
	I1202 20:14:07.081637  145082 status.go:384] host is not running, skipping remaining checks
	I1202 20:14:07.081648  145082 status.go:176] multinode-305909-m03 status: &{Name:multinode-305909-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
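
Note on the output above: with one node stopped, minikube status still prints every node but exits with status 7. A minimal shell sketch of the same check (profile name arbitrary):

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status; echo "status exit: $?"   # prints 7 while m03 is stopped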

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-305909 node start m03 -v=5 --alsologtostderr: (7.356360831s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.17s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-305909
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-305909
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-305909: (25.038403795s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305909 --wait=true -v=5 --alsologtostderr
E1202 20:14:48.924331    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-305909 --wait=true -v=5 --alsologtostderr: (51.088271185s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-305909
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.25s)
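
For reference, a minimal shell sketch of the stop-then-restart cycle above, which is expected to leave the node list unchanged (profile name arbitrary; the restart reuses the existing profile, so no driver flags are needed):

    minikube node list -p multinode-demo
    minikube stop -p multinode-demo
    minikube start -p multinode-demo --wait=true
    minikube node list -p multinode-demo   # same nodes as before the stop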

                                                
                                    
TestMultiNode/serial/DeleteNode (5.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-305909 node delete m03: (4.935345497s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-305909 stop: (23.774752545s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-305909 status: exit status 7 (94.781912ms)

                                                
                                                
-- stdout --
	multinode-305909
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-305909-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-305909 status --alsologtostderr: exit status 7 (99.653713ms)

                                                
                                                
-- stdout --
	multinode-305909
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-305909-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:16:01.075268  152912 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:16:01.075458  152912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:16:01.075472  152912 out.go:374] Setting ErrFile to fd 2...
	I1202 20:16:01.075479  152912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:16:01.075775  152912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:16:01.075991  152912 out.go:368] Setting JSON to false
	I1202 20:16:01.076040  152912 mustload.go:66] Loading cluster: multinode-305909
	I1202 20:16:01.076132  152912 notify.go:221] Checking for updates...
	I1202 20:16:01.076505  152912 config.go:182] Loaded profile config "multinode-305909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:16:01.076525  152912 status.go:174] checking status of multinode-305909 ...
	I1202 20:16:01.077362  152912 cli_runner.go:164] Run: docker container inspect multinode-305909 --format={{.State.Status}}
	I1202 20:16:01.096975  152912 status.go:371] multinode-305909 host status = "Stopped" (err=<nil>)
	I1202 20:16:01.096996  152912 status.go:384] host is not running, skipping remaining checks
	I1202 20:16:01.097003  152912 status.go:176] multinode-305909 status: &{Name:multinode-305909 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:16:01.097028  152912 status.go:174] checking status of multinode-305909-m02 ...
	I1202 20:16:01.097342  152912 cli_runner.go:164] Run: docker container inspect multinode-305909-m02 --format={{.State.Status}}
	I1202 20:16:01.123765  152912 status.go:371] multinode-305909-m02 host status = "Stopped" (err=<nil>)
	I1202 20:16:01.123783  152912 status.go:384] host is not running, skipping remaining checks
	I1202 20:16:01.123797  152912 status.go:176] multinode-305909-m02 status: &{Name:multinode-305909-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.97s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305909 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 20:16:45.851890    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-305909 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.961616497s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-305909 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.70s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-305909
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305909-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-305909-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.976843ms)

                                                
                                                
-- stdout --
	* [multinode-305909-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-305909-m02' is duplicated with machine name 'multinode-305909-m02' in profile 'multinode-305909'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-305909-m03 --driver=docker  --container-runtime=crio
E1202 20:16:57.357028    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-305909-m03 --driver=docker  --container-runtime=crio: (33.337589036s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-305909
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-305909: exit status 80 (344.821342ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-305909 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-305909-m03 already exists in multinode-305909-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-305909-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-305909-m03: (2.097841019s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.93s)
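
For reference, a minimal shell sketch of the name-conflict case above: starting a new profile whose name collides with an existing machine name in another profile is rejected with MK_USAGE (profile names arbitrary, matching the earlier multinode sketch):

    minikube start -p multinode-demo-m02 --driver=docker --container-runtime=crio
    echo "exit: $?"   # 14: the name collides with the m02 machine of the multinode-demo profile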

                                                
                                    
TestPreload (149.94s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-722126 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1202 20:18:46.175802    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-722126 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m30.63958382s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-722126 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-722126 image pull gcr.io/k8s-minikube/busybox: (2.096633307s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-722126
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-722126: (5.910835954s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-722126 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1202 20:20:00.452005    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-722126 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (48.518591688s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-722126 image list
helpers_test.go:175: Cleaning up "test-preload-722126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-722126
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-722126: (2.511237557s)
--- PASS: TestPreload (149.94s)
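
For reference, a minimal shell sketch of the preload scenario above: bring a cluster up without the preloaded tarball, pull an extra image, stop, restart with preload enabled, and confirm the pulled image is still listed (profile name arbitrary; flags from the logged commands):

    minikube start -p preload-demo --memory=3072 --wait=true --preload=false \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --wait=true --preload=true \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image list   # busybox should still appear
    minikube delete -p preload-demo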

                                                
                                    
TestScheduledStopUnix (108.25s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-242340 --memory=3072 --driver=docker  --container-runtime=crio
E1202 20:20:09.250008    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-242340 --memory=3072 --driver=docker  --container-runtime=crio: (31.370848384s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-242340 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:20:36.332732  166912 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:20:36.332928  166912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:20:36.332953  166912 out.go:374] Setting ErrFile to fd 2...
	I1202 20:20:36.332973  166912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:20:36.333267  166912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:20:36.333567  166912 out.go:368] Setting JSON to false
	I1202 20:20:36.333805  166912 mustload.go:66] Loading cluster: scheduled-stop-242340
	I1202 20:20:36.334238  166912 config.go:182] Loaded profile config "scheduled-stop-242340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:20:36.334350  166912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/config.json ...
	I1202 20:20:36.334632  166912 mustload.go:66] Loading cluster: scheduled-stop-242340
	I1202 20:20:36.334800  166912 config.go:182] Loaded profile config "scheduled-stop-242340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-242340 -n scheduled-stop-242340
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-242340 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:20:36.798806  167004 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:20:36.799070  167004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:20:36.799092  167004 out.go:374] Setting ErrFile to fd 2...
	I1202 20:20:36.799112  167004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:20:36.799438  167004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:20:36.799778  167004 out.go:368] Setting JSON to false
	I1202 20:20:36.800031  167004 daemonize_unix.go:73] killing process 166929 as it is an old scheduled stop
	I1202 20:20:36.803849  167004 mustload.go:66] Loading cluster: scheduled-stop-242340
	I1202 20:20:36.804441  167004 config.go:182] Loaded profile config "scheduled-stop-242340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:20:36.804581  167004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/config.json ...
	I1202 20:20:36.804795  167004 mustload.go:66] Loading cluster: scheduled-stop-242340
	I1202 20:20:36.804949  167004 config.go:182] Loaded profile config "scheduled-stop-242340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1202 20:20:36.810139    4470 retry.go:31] will retry after 109.776µs: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.811355    4470 retry.go:31] will retry after 209.413µs: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.812428    4470 retry.go:31] will retry after 292.02µs: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.813524    4470 retry.go:31] will retry after 253.861µs: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.814638    4470 retry.go:31] will retry after 381.904µs: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.815747    4470 retry.go:31] will retry after 603.996µs: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.816828    4470 retry.go:31] will retry after 1.582247ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.819029    4470 retry.go:31] will retry after 1.336254ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.821226    4470 retry.go:31] will retry after 2.130159ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.824462    4470 retry.go:31] will retry after 2.903992ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.827693    4470 retry.go:31] will retry after 6.131562ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.834223    4470 retry.go:31] will retry after 8.198423ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.843518    4470 retry.go:31] will retry after 12.373799ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.856742    4470 retry.go:31] will retry after 11.272997ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.868995    4470 retry.go:31] will retry after 18.353078ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
I1202 20:20:36.889512    4470 retry.go:31] will retry after 40.959446ms: open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-242340 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-242340 -n scheduled-stop-242340
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-242340
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-242340 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:21:02.754730  167371 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:21:02.754905  167371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:21:02.754935  167371 out.go:374] Setting ErrFile to fd 2...
	I1202 20:21:02.754955  167371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:21:02.755228  167371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2526/.minikube/bin
	I1202 20:21:02.755496  167371 out.go:368] Setting JSON to false
	I1202 20:21:02.755630  167371 mustload.go:66] Loading cluster: scheduled-stop-242340
	I1202 20:21:02.756028  167371 config.go:182] Loaded profile config "scheduled-stop-242340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:21:02.756134  167371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/scheduled-stop-242340/config.json ...
	I1202 20:21:02.756355  167371 mustload.go:66] Loading cluster: scheduled-stop-242340
	I1202 20:21:02.756517  167371 config.go:182] Loaded profile config "scheduled-stop-242340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1202 20:21:45.851515    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-242340
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-242340: exit status 7 (66.924206ms)

                                                
                                                
-- stdout --
	scheduled-stop-242340
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-242340 -n scheduled-stop-242340
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-242340 -n scheduled-stop-242340: exit status 7 (66.65845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-242340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-242340
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-242340: (5.263411346s)
--- PASS: TestScheduledStopUnix (108.25s)
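
For reference, a minimal shell sketch of the scheduled-stop flow above (profile name and delays arbitrary):

    minikube start -p sched-demo --memory=3072 --driver=docker --container-runtime=crio
    minikube stop -p sched-demo --schedule 5m        # schedule a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled   # cancel any pending scheduled stop
    minikube stop -p sched-demo --schedule 15s       # schedule a short stop and let it fire
    sleep 30
    minikube status -p sched-demo                    # exits 7 once the host is stopped
    minikube delete -p sched-demo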

                                                
                                    
TestInsufficientStorage (10.41s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-078962 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1202 20:21:57.357115    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-078962 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.770627358s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b755e9d5-c680-4ece-81bc-d9107923ae2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-078962] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a084467c-412e-4861-85b6-1ecbf312eb37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22021"}}
	{"specversion":"1.0","id":"47d4ab84-4df2-4775-85e4-7c4dfbc0e6a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b95e745c-5930-40c2-bcba-8fd5e70e1ad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig"}}
	{"specversion":"1.0","id":"1a90ea2f-9dfe-498d-a875-ae3c794dc1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube"}}
	{"specversion":"1.0","id":"7b4f76e4-fd36-4ed4-ae9b-6e34923205b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"49683d0c-e906-4776-a38c-0dbd6a1ebe18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"86e91145-9f45-4bca-b972-18591cfc8a92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"77edeff1-9934-4e37-aad3-4900cdefd658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5a99d521-0559-486d-b4ae-ce2b37fe8ae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2addd5a7-fb4d-452a-a810-bd66e4fc3fc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1512bb1a-e237-45da-a33b-48d76c0f50bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-078962\" primary control-plane node in \"insufficient-storage-078962\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"02e1dbfe-80da-410b-b5ad-1469e5336ede","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffe1389c-f9b5-4a60-a280-f4b90dd5b235","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"35b4e8f4-ae65-463c-be68-bfe86e42d7bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-078962 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-078962 --output=json --layout=cluster: exit status 7 (307.570122ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-078962","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-078962","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 20:22:01.222381  169083 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-078962" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-078962 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-078962 --output=json --layout=cluster: exit status 7 (349.521213ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-078962","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-078962","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 20:22:01.570504  169148 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-078962" does not appear in /home/jenkins/minikube-integration/22021-2526/kubeconfig
	E1202 20:22:01.582592  169148 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/insufficient-storage-078962/events.json: no such file or directory

                                                
                                                
** /stderr **
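Note: the status payloads above are plain JSON, so the cluster and per-node states can be pulled out with jq if needed; an illustrative one-liner (assuming jq is installed on the host) is:

	$ out/minikube-linux-arm64 status -p insufficient-storage-078962 --output=json --layout=cluster | jq '{cluster: .StatusName, nodes: [.Nodes[].StatusName]}'
	# -> {"cluster": "InsufficientStorage", "nodes": ["InsufficientStorage"]}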
helpers_test.go:175: Cleaning up "insufficient-storage-078962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-078962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-078962: (1.978959176s)
--- PASS: TestInsufficientStorage (10.41s)

                                                
                                    
TestRunningBinaryUpgrade (62.55s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1253348551 start -p running-upgrade-568729 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1253348551 start -p running-upgrade-568729 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.206015541s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-568729 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-568729 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.609899451s)
helpers_test.go:175: Cleaning up "running-upgrade-568729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-568729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-568729: (2.034912504s)
--- PASS: TestRunningBinaryUpgrade (62.55s)

                                                
                                    
TestMissingContainerUpgrade (143.43s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1922167019 start -p missing-upgrade-210819 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1922167019 start -p missing-upgrade-210819 --memory=3072 --driver=docker  --container-runtime=crio: (1m19.718259474s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-210819
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-210819
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-210819 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1202 20:23:46.175674    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-210819 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.389377482s)
helpers_test.go:175: Cleaning up "missing-upgrade-210819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-210819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-210819: (2.55561035s)
--- PASS: TestMissingContainerUpgrade (143.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-778048 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-778048 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (97.626506ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-778048] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-2526/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2526/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
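Note: the MK_USAGE failure above is the behaviour this test expects: --no-kubernetes and --kubernetes-version are mutually exclusive. Outside the test, either dropping the version flag or unsetting the global default (as the error message suggests) resolves it; an illustrative pair of commands:

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-778048 --no-kubernetes --driver=docker --container-runtime=crio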

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-778048 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-778048 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.826031241s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-778048 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.23s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (11.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-778048 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-778048 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.417943311s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-778048 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-778048 status -o json: exit status 2 (394.158834ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-778048","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-778048
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-778048: (2.332233585s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.14s)

                                                
                                    
TestNoKubernetes/serial/Start (9.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-778048 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-778048 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.014181641s)
--- PASS: TestNoKubernetes/serial/Start (9.01s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22021-2526/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-778048 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-778048 "sudo systemctl is-active --quiet service kubelet": exit status 1 (371.861526ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
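Note: the non-zero exit here is the point of the check: systemctl is-active exits 0 only when the unit is active and non-zero (status 3 in this run) when it is not, so a failing exit confirms kubelet is stopped. An illustrative local equivalent:

	$ systemctl is-active --quiet kubelet; echo $?   # prints a non-zero code when kubelet is not running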

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-778048
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-778048: (1.395134494s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-778048 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-778048 --driver=docker  --container-runtime=crio: (7.705595152s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-778048 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-778048 "sudo systemctl is-active --quiet service kubelet": exit status 1 (346.546638ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (11.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (299.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1234165900 start -p stopped-upgrade-085945 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1234165900 start -p stopped-upgrade-085945 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.648657173s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1234165900 -p stopped-upgrade-085945 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1234165900 -p stopped-upgrade-085945 stop: (1.239963199s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-085945 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1202 20:26:45.851022    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:26:57.357685    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:28:46.175729    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-535807/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-085945 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.397149156s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (299.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-085945
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-085945: (1.69262387s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.69s)

                                                
                                    
TestPause/serial/Start (82.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-774682 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1202 20:31:28.925724    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:31:45.852715    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/functional-374330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:31:57.357420    4470 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2526/.minikube/profiles/addons-391119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-774682 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.197538417s)
--- PASS: TestPause/serial/Start (82.20s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (27.07s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-774682 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-774682 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.055279025s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.07s)

                                                
                                    

Test skip (35/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.17
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1202 18:49:18.981640    4470 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1202 18:49:19.096800    4470 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
W1202 18:49:19.148333    4470 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.17s)
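Note: the skip follows directly from the two 404 responses above: no v1.35.0-beta.0 cri-o/arm64 preload tarball is published at either location, so there is nothing to verify. An illustrative manual check of the same endpoint:

	$ curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 | head -n1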

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-936869 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-936869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-936869
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    